Powering the Physical AI Era: How GlobalFoundries Enables Real-Time Machines That Sense, Think, Act, and Communicate

February 13, 2026

Ed Kaste, SVP of Ultra-Low Power CMOS Business, GlobalFoundries

Physical AI is already taking shape in the real world, appearing in everything from self-driving vehicles navigating cities from San Francisco to Shenzhen, to autonomous robots operating in industrial warehouses and drones delivering packages. The future of Physical AI will extend even further, spanning humanoid robots, autonomous imaging systems in healthcare and a wide range of other real-world applications. This next phase brings AI beyond data centers and directly into the physical world, in the form of machines that interact with their environments in real time.

Delivering these capabilities at scale, however, introduces a new set of constraints and opportunities for semiconductor technology. Multimodal sensing, distributed intelligence, actuation and power efficiency become as critical as performance itself. Purpose-built semiconductor platforms are the foundation that will enable Physical AI to move from early adoption to widespread deployment.

Purpose-built semiconductor platforms for Physical AI

Physical AI introduces broader workloads that are reshaping the requirements for semiconductors, and those requirements create a major opportunity for GF to deliver reliable, energy-efficient, highly integrated platforms that can adapt over time. Here's how our platforms are enabling this next wave of Physical AI:

- FDX: GF's industry-leading FDX platform is ideally suited for Physical AI applications that are optimized for long battery life in small form factors, thanks to its ultra-low power and low-leakage operation, superior RF performance, integrated power management and highly reliable operation up to 150 degrees Celsius.
- FinFET: GF's differentiated FinFET platform provides increased performance at the right power profile, fully optimized for integrated solutions, enabling efficient sensing, real-time processing and seamless communication in real-world environments.
- Memory: Embedded non-volatile memory options including MRAM and RRAM combine low power consumption with best-in-class access times, allowing customers to build differentiated systems on pre-validated memory IP. This is critical for future-proofing Physical AI designs as traditional memory scaling faces both physical and economic limits.
- Silicon photonics and RF innovation: These technologies drive high-speed connectivity by increasing the speed and bandwidth of interconnects within and beyond the application, so that billions of devices can communicate reliably at the lowest possible power.
- Advanced packaging and heterogeneous integration: Bringing together diverse technologies (compute, memory, RF and power) into compact, efficient systems optimized for distributed deployment.

The real-time operating model behind Physical AI

As AI undergoes this fundamental shift into the real world, Physical AI applications must respond in real time to the environment around them. In our last blog, our Chief Business Officer, Mike Hogan, introduced a simple but powerful framework that defines how Physical AI functions: Sense – Think – Act – Communicate.

Sense: Capture data from the physical environment using multimodal sensors such as audio, haptic, optical, radar and environmental sensors.
Think: Process and interpret that data locally to make real-time decisions in a deterministic, safe and secure way.
Act: Execute precise, timely actions through motors or actuators with precision feedback loops.
Communicate: Exchange data reliably and securely across distributed systems, from edge to cloud and across devices.
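To make the framework concrete, here is a minimal sketch of how the four stages compose into one real-time loop. It is illustrative only: the function names, sensor fields and 10 ms cycle budget are assumptions for the example, not GF or MIPS APIs.

```python
# A minimal Sense - Think - Act - Communicate loop. Illustrative only:
# every name and number here is a placeholder, not a GF or MIPS API.
import time

CYCLE_BUDGET_S = 0.010  # assumed 10 ms deadline per iteration

def sense() -> dict:
    # Capture one multimodal snapshot (radar, camera, inertial, ...).
    return {"radar_range_m": 12.4, "imu_accel": (0.0, 0.0, 9.8)}

def think(inputs: dict) -> dict:
    # Interpret the data locally and decide deterministically.
    return {"brake": inputs["radar_range_m"] < 5.0}

def act(decision: dict) -> None:
    # Drive actuators; a real system closes a precision feedback loop here.
    if decision["brake"]:
        print("braking")

def communicate(inputs: dict, decision: dict) -> None:
    # Share state from edge to cloud, kept off the hard real-time path.
    pass

for _ in range(100):  # a deployed system runs this loop continuously
    start = time.monotonic()
    inputs = sense()
    decision = think(inputs)
    act(decision)
    communicate(inputs, decision)
    # In Physical AI a missed deadline is a fault, not just a slowdown.
    time.sleep(max(0.0, CYCLE_BUDGET_S - (time.monotonic() - start)))
```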
However, any weakness, whether in latency, power efficiency, security or reliability, can degrade overall system performance. That's why, looking ahead, Physical AI systems will become more customized and adaptive, optimized not just for compute but for real-world operation over long lifecycles.

Overcoming the power and latency constraints of Physical AI

Power and latency are fundamental system-level constraints that shape what is possible in Physical AI. These applications operate continuously in confined thermal environments, often without direct access to abundant energy, while simultaneously requiring real-time responsiveness. As semiconductor content increases, inefficient power consumption and excessive latency can limit performance, reduce reliability and shorten operational life. Optimizing for power efficiency and ultra-low latency lets Physical AI systems do more with less under power, thermal and compute constraints. This makes innovation in semiconductor platforms essential to scaling Physical AI beyond pilots and, eventually, into mission-critical environments.

Enabling software-defined, distributed intelligence

As Physical AI systems evolve, architectures are shifting away from centralized compute toward distributed intelligence. Rather than sending all data to the cloud or a single processor, intelligence is placed at the interface with the real world, closer to where data is generated and actions are taken. Software-defined architectures play a key role in this transition. By decoupling software from hardware, developers can continuously upgrade features and flexibly support evolving AI models without redesigning the underlying hardware. This is especially critical in long-lived systems such as vehicles, industrial equipment and robotics platforms.

Physical AI today: Software-defined vehicles

One of the most visible examples of Physical AI today is the software-defined vehicle (SDV). Modern vehicles integrate hundreds of chips to support advanced driver assistance systems (ADAS), infotainment, connectivity and battery management, and as autonomy, electrification and connectivity accelerate, semiconductor content per vehicle continues to rise. In the last five years alone, average semiconductor content per vehicle has risen from $700 to $1,000, and S&P Global Mobility estimates it will continue growing to approximately $1,400 by the end of the decade. These systems rely on high-performance sensors, real-time processing and precise actuation to improve automotive safety and user experience, all while operating under strict power and thermal constraints.

Physical AI tomorrow: Humanoid robots

The same principles extend into emerging humanoid systems, which need even higher degrees of flexibility to support evolving AI models, sensor fusion algorithms and autonomy stacks. Humanoid robots require multimodal sensing to perceive their environments, distributed intelligence to process data with ultra-low latency and precise motor control to execute fluid, human-like motion in real time across dozens of degrees of freedom. It's no surprise that a high-end industrial humanoid can carry up to four times the semiconductor content of an SDV. These growing silicon footprints make one thing clear: scaling Physical AI will depend on platforms that can deliver real-time performance within tight power, thermal and reliability limits.
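One way to see why latency is a system constraint rather than a chip metric is to write the loop deadline out as a budget. The sketch below uses entirely hypothetical numbers (a 500 Hz control loop and assumed per-stage timings, with communication kept off the critical path) to show how quickly a 2 ms cycle is consumed.

```python
# Illustrative budget arithmetic for a real-time actuation loop.
# Every number below is a hypothetical assumption for the example,
# not a published GF or robotics specification.
LOOP_RATE_HZ = 500                 # assumed joint-control loop rate
budget_ms = 1000 / LOOP_RATE_HZ    # 2.0 ms end-to-end per cycle

stage_ms = {
    "sense (sensor readout + fusion)": 0.6,
    "think (local inference/decision)": 0.9,
    "act (actuator command + settle)": 0.4,
}

used = sum(stage_ms.values())
margin = budget_ms - used
for stage, ms in stage_ms.items():
    print(f"{stage}: {ms:.1f} ms")
print(f"budget {budget_ms:.1f} ms, used {used:.1f} ms, margin {margin:.1f} ms")
assert margin >= 0, "deadline miss: the loop cannot run at this rate"
```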
Building the foundation for the Physical AI future

As the Physical AI wave pushes intelligence from the cloud into the physical world, success is no longer defined by raw compute alone, but by the ability to deliver reliable, energy-efficient and adaptable systems at scale. At GF, we are continuously looking for opportunities to enhance our technology platform for this future, designed for sensing, real-time decision-making, actuation and communication. Following our recent acquisition of MIPS, we have layered MIPS' IP suite onto our platforms to better target the growing Physical AI opportunity. In the next installment of this blog, we'll talk with MIPS CEO Sameer Wasson about how we've combined MIPS' architecture, IP and design with GF's optimized process technologies to advance compute workloads and deliver the deterministic real-time performance that Physical AI requires.

Ed Kaste is Senior Vice President of the Ultra-Low Power Business at GF, where he leads the company's ultra-low power platform strategy, enabling differentiated solutions across smart mobile, IoT, automotive, communications infrastructure, data center, and aerospace and defense markets. Previously, he held senior leadership roles spanning product management, IoT and the FDX™ business, with a focus on driving growth through application-driven semiconductor innovation. He joined GlobalFoundries in 2015 following leadership roles in semiconductor research, development and manufacturing at IBM.
Sensor fusion in action: How cameras and LiDAR integrate with radar for safer driving

January 29, 2026

By Yuichi Motohashi, Deputy Director / Global Segment Lead, Automotive Display, Camera, LiDAR & SerDes, GlobalFoundries

Sense – analyze – act. This is the principle that advanced driver assistance systems (ADAS) operate on. Modern vehicles rely on a network of sensors to build a more precise, reliable perception of their surroundings. Sensor fusion combines these inputs, from radar, camera, LiDAR and ultrasound, with artificial intelligence and deep learning to deliver the environmental acuity vehicles need to make split-second decisions.

Since 1999, when Mercedes-Benz "taught the car to see," radar has been a proven cornerstone of ADAS. Camera and LiDAR technologies, however, are rapidly advancing, adding new levels of detail and depth to a vehicle's perception. LiDAR in particular has long been stuck between functional solutions and scalable manufacturing. GF is closing that gap, using FinFET, advanced packaging and photonics to unlock the path to mass-market viability. Together, these complementary sensors provide the high-resolution imagery, 3D mapping and object classification capabilities essential for the safer driving of today and the fully autonomous mobility of tomorrow.

Cameras: Sharpening your car's view of the world

Cameras capture high-quality images around the car to detect lane markings, speed limits, turn signals, pedestrians and more. Sophisticated algorithms analyze these images to determine the distance, size and speed of objects, enabling the system to react appropriately.

Automotive cameras do not use the ultra-high megapixel counts of mobile phones, because additional pixels mean more data for the vehicle's computer system to process. Extremely high-resolution images would significantly expand the volume of data transmitted to the central processor, potentially exceeding the capabilities of the systems on chip (SoCs) that must analyze this information instantaneously to ensure safety. Excessive data could slow processing or overwhelm the system. Detection distance must therefore be carefully balanced against the processing power required of the central SoC.

The primary image quality key performance indicator (KPI) is dynamic range, which is vital for maintaining accuracy in difficult lighting and weather conditions, from intense sunlight at dusk to darkness, heavy rainfall or fog. Achieving such high-dynamic-range imaging requires increasingly sophisticated readout ICs (ROICs) within automotive stacked CMOS image sensors (CIS). There is a direct relationship between the system-level, circuit-level and transistor-level requirements for a high-performance automotive CIS ROIC:

System level
- Enhanced resolution (from 8MP to 12–16MP), frame rate (≥30fps) and dynamic range (≥130dB) are necessary, collectively increasing the processing load on the ROIC. Transmission bandwidth of at least 6Gbps is essential, underscoring the need for SerDes integration (see the back-of-envelope sketch after this list).
- Long-range detection depends on high pixel resolution, high-speed operation and minimal read noise (including 1/f and RTS noise).
- Improved low-light performance requires minimizing both ADC and transistor noise.

Circuit level
- To accommodate high bandwidth, circuits must achieve elevated clock speeds, low jitter and reduced noise.
- Die size limitations call for high capacitor density, robust transconductance (gm) and efficient logic cell area usage.
- Reliable functionality at temperatures up to 125°C demands low leakage characteristics.

Transistor level
- High-speed operation mandates transistors with superior Ft/Fmax and low-noise characteristics.
- Consistent performance at elevated temperatures relies on effective leakage control and optimized transistor density.
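As a rough illustration of why the ≥6Gbps figure appears at the system level, the sketch below computes the raw pixel data rate for the resolutions and frame rate listed above. The 12-bit RAW depth and ~20% link overhead are assumptions for the example, not the specification of any particular sensor or SerDes link.

```python
# Back-of-envelope camera link bandwidth, using the KPIs above.
# Assumed: 12-bit RAW output and ~20% protocol/encoding overhead
# (illustrative values, not a specific sensor or SerDes spec).
BITS_PER_PIXEL = 12
FRAME_RATE_FPS = 30
LINK_OVERHEAD = 1.20

for megapixels in (8, 12, 16):
    raw_gbps = megapixels * 1e6 * BITS_PER_PIXEL * FRAME_RATE_FPS / 1e9
    line_gbps = raw_gbps * LINK_OVERHEAD
    print(f"{megapixels} MP: {raw_gbps:.1f} Gbps raw -> ~{line_gbps:.1f} Gbps on the wire")

# 8 MP:  2.9 Gbps raw -> ~3.5 Gbps on the wire
# 12 MP: 4.3 Gbps raw -> ~5.2 Gbps on the wire
# 16 MP: 5.8 Gbps raw -> ~6.9 Gbps on the wire
```

Under these assumptions, a 16MP sensor at 30fps already exceeds 6Gbps on the wire, which is why the bandwidth requirement scales with the resolution roadmap.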
Images captured by vehicle cameras underpin many ADAS features, such as lane departure warnings, collision avoidance and parking assistance, making them integral to contemporary automotive safety solutions. GF's advanced technology platform continues to facilitate the development of state-of-the-art automotive CIS solutions.

LiDAR: Mapping the roads in 3D

If cameras are the car's eyes, LiDAR adds depth perception. Instead of 2D images, LiDAR emits laser pulses and measures their return to generate a 3D point cloud of the surroundings, building a detailed 3D map of the world around the vehicle. This is ultimately how the car knows the difference between a pedestrian, a bicyclist, an animal, another car or a garbage can.

Take Aurora, the driverless commercial self-driving truck service. Its long-range LiDAR detects objects in the dark of night more than 450 meters away, identifying them up to 11 seconds sooner than a traditional driver would. This precise 3D vision powers today's ADAS features, like lane keeping, pedestrian detection and adaptive cruise control, and is laying the foundation for full self-driving functionality in the future.

Key figures of merit for automotive LiDAR systems

- Detection range and accuracy: long-range LiDAR must exceed 300 m detection distance.
- Field of view (FoV): short-range LiDAR targets a horizontal FoV of ~150° and a vertical FoV of 20–30°.
- Angular resolution: 0.1–0.15° (long range); 0.6° (short range).
- Distance resolution / ranging accuracy: target improvement to around 5 cm.
- Frame rate: increased target of 30 fps.
- Point rate: dToF increasing to ~10 M pts/sec; FMCW expected at ~2 M pts/sec.
- Power consumption: system-level target well under 20 W.
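To connect these figures of merit, here is a small sketch of the direct time-of-flight (dToF) arithmetic behind them: range follows from the round-trip delay of a laser pulse, and ranging accuracy maps directly onto the timing precision the receiver electronics must resolve. The numbers plugged in are the targets from the list above; the code itself is a generic illustration, not GF's implementation.

```python
# Direct time-of-flight (dToF) arithmetic behind the figures of merit above.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(delay_s: float) -> float:
    """A pulse travels out and back, so range is half the round trip."""
    return C * delay_s / 2

# Round-trip delay for the 300 m long-range target: ~2.0 microseconds.
delay_300m = 2 * 300 / C
print(f"300 m round trip: {delay_300m * 1e6:.2f} us")

# Timing precision needed for ~5 cm ranging accuracy: ~333 picoseconds.
timing_for_5cm = 2 * 0.05 / C
print(f"5 cm accuracy needs ~{timing_for_5cm * 1e12:.0f} ps timing resolution")
```

Resolving hundreds of picoseconds at the receiver is what drives the demand for fast, low-noise photodetector and amplifier electronics, which is where the platforms described next come in.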
How GlobalFoundries powers smarter sensors

GF is at the forefront of advancing both camera and LiDAR technologies, delivering solutions that improve performance, integration and efficiency.

For cameras, the image sensor is the core component that determines performance. GlobalFoundries delivers advanced readout IC (ROIC) solutions for stacked CMOS image sensors (CIS), utilizing industry-leading 40nm and 22nm process nodes to meet the demanding requirements of next-generation automotive applications. Both platforms provide low-noise analog performance and low power consumption even at extreme automotive temperatures. Image sensors built on the 40nm platform deliver excellent image quality and high reliability, while the 22nm-based platform adds outstanding signal processing capability and low-power operation. Some of the benefits:

- Higher resolution and improved dynamic range: GF's solutions enable image sensors to capture higher-resolution, higher-dynamic-range images through faster, low-noise A/D conversion at lower power.
- System integration: Integrating essential components like memory, the image signal processor (ISP), analog and high-speed interfaces onto a single chip reduces ADAS complexity.

With cameras generating and processing high volumes of data, Serializer/Deserializer (SerDes) technology converts parallel data into a fast serial stream, sends it over a single wire and converts it back for processing. GF is playing an active role in the OpenGMSL alliance and supporting SerDes-integrated smart sensors.

For LiDAR, GF's silicon photonics on the 45SPCLO platform can integrate the laser source, light emitter, receiver and signal processing on a single chip, reducing LiDAR size and making it easier to fit into vehicles. Working with both O-band and C-band wavelengths, the platform also uses a special silicon nitride (SiN) waveguide to achieve best-in-class propagation loss. In addition, GF's high-performance (HP) silicon germanium (SiGe) is the gold standard for image quality in high-performance LiDAR, offering unparalleled response times for the transimpedance amplifiers that process signals, so objects are detected faster. Advantages include:

- Miniaturization: Integrating multiple optical components onto one chip yields more cost-efficient, compact LiDAR systems, and highly integrated, true solid-state FMCW LiDAR lowers manufacturing costs, making LiDAR more accessible.
- Electronics integration: Combining silicon photonics with CMOS electronics enables enhanced signal processing for smarter, more capable sensors.

The rise of cameras and LiDAR to steer the future of autonomous driving

Radar, cameras and LiDAR each shine on their own, but they need to work in concert to make cars smarter and safer. GF's technology sits at the heart of fusing these sensors, helping cars on the road see farther, react quicker and make smarter decisions in the blink of an eye. Cameras and LiDAR are still emerging technologies in the automotive industry, with massive potential to advance in performance and integration, and GF is empowering automakers to accelerate the deployment of safer, smarter and more autonomous vehicles.

Author bio

Yuichi Motohashi is the Deputy Director of End Markets at GlobalFoundries, responsible for leading the global segment in automotive cameras, LiDAR, SerDes and displays, which facilitate next-generation ADAS, autonomous driving and enhanced in-cabin experiences.