March 9, 2026

As Physical AI moves from concept to deployment, intelligence must be distributed across machines at the edge, in real time and under strict power, latency, safety and cost constraints. To prepare for the Physical AI era, GF acquired MIPS, a leading provider of AI and embedded processor IP, software and tools. We sat down with Sameer Wasson, CEO of MIPS, to discuss how the combination of MIPS’ architecture, IP and design expertise with GF’s differentiated process technologies gives customers a distinct path to Physical AI: deterministic, safety-capable compute running at the edge, unlocking the next wave of real machines that sense, think, act and communicate in real time – and how GF’s purpose-built platforms enable it at scale.

Why was bringing MIPS into GF the right strategic move to help address the Physical AI market?

AI is at an inflection point, and customers need real-time, deterministic intelligence that interacts with the physical world safely and reliably – what we call Physical AI. MIPS brings a 40-year heritage of efficient, scalable compute for performance-critical systems, now centered on RISC-V application processors and real-time subsystems designed for low latency, functional safety and power efficiency. Pairing that with GF’s differentiated process technologies and global manufacturing gives customers something unique: a platform that spans IP, custom silicon and volume production, so they can get to working silicon faster and tailor it to real-world edge AI. From powering the Nintendo 64, released in 1996, to Mobileye’s most advanced driver-assistance chips such as the EyeQ6, with over 200 million ADAS SoCs shipped, and even serving as the foundation for leading cloud hyperscalers’ infrastructure, we have a track record of delivering workload-focused performance at scale.
GF’s decision to acquire MIPS last year was about coming together to create a more flexible, differentiated offering for customers by pairing GF’s leading process technology and manufacturing scale with MIPS’ processor IP and software enablement. That rationale sat at the core of the decision, with a shared goal of helping our customers get to working silicon faster and tailoring that silicon to real-world edge AI needs. The timing couldn’t align better with the surge in AI demand across transportation, communications and datacenter infrastructure, robotics and intelligent edge markets.

How does intelligence show up in each stage of Physical AI workloads for customers?

Physical AI takes the capabilities of AI models from the data center and deploys them at the edge. The foundation of Physical AI is what I like to call the S.T.A.C., the closed-loop workload that enables platforms to Sense, Think, Act and Communicate, and empowers edge platforms to be intelligent without sacrificing latency, safety, privacy or efficiency. Each stage has distinct compute and system requirements, and the most successful platforms co-optimize them rather than over-build any single part.

For Sense, the system collects real-time data from sensors such as cameras, LiDAR, radar and analog inputs to understand its surroundings. It must efficiently fuse and prioritize these different data types while staying within tight power limits.

In the Think stage, the system quickly interprets sensor data and makes decisions using on-device AI and control algorithms. This requires high-performance, low-latency compute that delivers deterministic results within strict power budgets – especially for robots and vehicles operating at the edge.

In the Act stage, the system converts decisions into physical movements by controlling motors, actuators, brakes or robotic limbs. This stage demands ultra-low latency and highly reliable responses, so actions like braking or obstacle avoidance happen within milliseconds.
The Communicate stage is where the system shares information internally and externally – between subsystems, other devices, the cloud or even humans. This requires secure, low-latency connectivity and support for multiple communication standards (such as Bluetooth® LE or 5G) without introducing delays.

In essence, there’s intelligence of different kinds in every stage, from algorithms that enact more precise movement to advanced multi-modal perception processing. Each customer’s workload will be a little different inside the S.T.A.C. loop, making flexibility key to successfully enabling Physical AI at the edge without breaking power, latency or safety budgets.

Where does MIPS differentiate across the S.T.A.C. loop?

MIPS differentiates in two key areas. The first is our software-first co-design approach. We start by profiling the customer’s workload, running their stack on our virtual platforms and core simulators to expose bottlenecks early. Then we shape the silicon around that, from custom instructions to memory-subsystem tweaks, so the shipped SoC meets real-world KPIs on day one. Our Atlas Explorer virtual platform is a good example of this “shift-left” approach.

The second is our deep, workload-specific hardware optimization on open RISC-V. Because our IP is modular, we can tailor cores and subsystems. MIPS has spent years pioneering multi-threading and functional safety capabilities in our processor IP to deliver event-driven, deterministic real-time performance. At the end of the day, we enable our customers to run their workloads on our core models to get insights into platform design ahead of silicon. This helps our customers align their workload and IP selection with the right stages of the workload they are trying to address. The open and modular nature of RISC-V enables us to deliver targeted workload enhancements at the hardware level, down to the core, unlocking deep levels of efficiency and performance.
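For readers mapping their own stack onto this loop, the Sense–Think–Act–Communicate cycle can be sketched in a few lines of code. This is an illustrative toy, not MIPS code: the stage functions, the obstacle threshold and the 10 ms budget are hypothetical placeholders for real sensor fusion, on-device inference, actuation and connectivity work.

```python
import time

def sense():
    """Collect and fuse raw sensor readings (camera, radar, IMU, ...)."""
    return {"obstacle_distance_m": 0.4}  # placeholder fused observation

def think(observation):
    """Decide an action from the fused observation (on-device inference)."""
    return "brake" if observation["obstacle_distance_m"] < 0.5 else "cruise"

def act(decision):
    """Drive motors/actuators; must be deterministic and fast."""
    return f"actuator:{decision}"

def communicate(decision):
    """Share state with other subsystems, devices or the cloud."""
    return f"telemetry:{decision}"

def stac_step(budget_ms=10.0):
    """One closed-loop S.T.A.C. iteration with a latency-budget check."""
    start = time.perf_counter()
    decision = think(sense())
    act(decision)
    communicate(decision)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    # A real safety-capable system would raise an event on a missed deadline.
    return decision, elapsed_ms <= budget_ms

decision, on_time = stac_step()
```

The point of the sketch is the structure, not the bodies: each stage is a separate unit with its own compute profile, and the whole loop is measured against one latency budget rather than any single stage being over-built.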
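The “run the workload, find the hotspot” step described above can be illustrated with a generic sketch using Python’s standard cProfile module. The navigation workload here is a made-up stand-in for a customer stack, and this is ordinary host-side profiling, not MIPS’ Atlas Explorer tooling.

```python
import cProfile
import io
import pstats

# Stand-in for a navigation workload; in practice this would be the
# customer's real software stack running on a core simulator.
def path_cost(n):
    return sum(i * i for i in range(n))

def navigate():
    # Deliberate hotspot: called far more often than anything else.
    return [path_cost(2000) for _ in range(200)]

profiler = cProfile.Profile()
profiler.enable()
navigate()
profiler.disable()

# Rank functions by cumulative time to expose the hotspot worth
# accelerating (e.g. with a custom instruction or a memory tweak).
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

In the resulting report, `path_cost` dominates the cycle count, which is exactly the kind of evidence that would drive a hardware/software co-design decision.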
You’ve talked about MIPS’ software-first approach. What does that look like in practice?

It means we start by understanding and profiling the customer’s software workload before finalizing the hardware, which lets us deliver insights that guide software/hardware co-design. By doing this, we can identify bottlenecks or specific functions that consume a lot of cycles and then optimize our IP to handle those efficiently. For example, if an autonomous drone’s navigation software is taxing the CPU, we might introduce a custom instruction or tweak the memory subsystem to accelerate it. This co-design process creates a tight feedback loop between software and hardware.

As the demand for high-performance, domain-specific compute accelerates, the ability to analyze and optimize interactions between workloads and customizable compute platforms becomes a true competitive advantage. A software-first approach closes the gap between hardware and software teams, enabling smarter architectural decisions and establishing a scalable, low-risk workflow for building Physical AI platforms aligned with real-world performance targets. In other words, by the time chips are fabricated, customer software is already running smoothly on the silicon.

This engagement model ultimately accelerates time to market because we’ve done all the tuning upfront. It also reduces risk in production cycles and empowers deeper partnerships – we work hand-in-hand with customers, which means we’re not just a vendor, we’re a collaborator in their product development. You get the performance you need with fewer surprises, and you hit your targets much more reliably.

How would you describe the value proposition from the combined GF + MIPS portfolio?

It’s platform leverage.
The requirements of new Physical AI products converge around the GF and MIPS portfolio: a need for ultra-low power platforms that operate reliably in harsh conditions, operate safely and securely, and are imbued with intelligence – all delivered with supply chain resilience. Together, we address the critical needs of next-generation Physical AI products by bridging advanced silicon technology with intelligent processor design. GF’s ultra-low power technologies, including its FDX and FinFET platforms, enable full system integration and reduce power leakage with adaptive body-biasing to stay within the tight power envelopes that Physical AI applications require. GF’s embedded memory, RF integration and advanced packaging also empower us to build the dense, efficient SoCs that Physical AI depends on for real-time responsiveness and power efficiency in deployed systems.

With this synergy of our portfolios, we can now help customers:

- Achieve the lowest power and highest integration in the industry
- Meet the latency, power and cost constraints for edge devices
- Accelerate time to market and drive their supply chain resilience

As the CEO of MIPS, what about GF drew you to joining forces with their team?

For me, it’s always been about how we can unlock the most value for our customers. When I considered what we could achieve together in the face of the rapidly emerging Physical AI market, it was a no-brainer. With the acquisition, we’ve created a more complete customer engagement model that allows us to support customers at multiple levels, including IP, custom silicon and software. Very few companies can bring that full stack to the table. If a standard, off-the-shelf chip doesn’t cut it for a customer’s needs, we’ll now build one that does and manufacture it for them. That level of partnership is what drew me in.
Beyond building leading technology platforms, GF’s resilient manufacturing footprint spanning the globe means we can scale our efforts and position ourselves to lead in high-growth areas like autonomous driving, smart devices and industrial automation. This geographic reach and manufacturing excellence stood out to me as a key benefit for our customers.

Final question for you, Sameer. What are you most looking forward to as we see Physical AI evolve?

Honestly, what excites me most is seeing our technology come to life in the real world. The humanoid robots, the next wave of autonomous driving features – those are no longer science fiction. Autonomous vehicle features that are currently in production have already surpassed what many once thought was possible at this point. Plus, given the speed at which we’re seeing Physical AI evolve, I think we’re about to see entirely new applications emerge over the next few years that aren’t even on the radar yet. For example, we might soon have robots in hospitals performing routine procedures, or agile delivery drones navigating complex environments seamlessly. I’m especially excited for those “firsts” – the first time someone’s life is saved by an AI-driven vehicle’s split-second decision, or the first household robot that truly understands and interacts with its environment in a human-like way. Those will be milestone moments.

I look forward to the day when devices powered by our full-stack solutions are out there in the world making a difference – whether it’s in a car avoiding an accident or a robot in a warehouse making operations safer and more efficient. Seeing our work enable new levels of autonomy and intelligence in everyday machines – that’s the reward. And given how fast this field is moving, I suspect we won’t have to wait long to witness some amazing breakthroughs.
Sameer Wasson is the chief executive officer of MIPS, leading the company’s mission to drive intelligence into action for next-generation autonomous machines. He previously led Texas Instruments’ embedded microprocessor and microcontroller business, advancing TI’s position in high-growth automotive and industrial markets—including embedded AI, software-defined vehicles and electrification. Earlier at TI, Wasson helped build the company’s mmWave radar business for automotive and industrial applications and held leadership roles in communications infrastructure processors.