Meta, the parent company of Facebook, Instagram, and WhatsApp, is testing its first in-house AI training chip, marking a significant step in its push to develop custom silicon and reduce reliance on external suppliers like Nvidia. According to sources who spoke with Reuters, Meta has begun a small-scale deployment of the chip, with plans to expand production if testing proves successful.
This initiative aligns with Meta’s long-term goal of cutting massive infrastructure costs as it invests heavily in AI-powered tools to fuel future growth. The company has projected 2025 expenses between $114 billion and $119 billion, with AI infrastructure expected to account for up to $65 billion in capital expenditures.
One source revealed that Meta designed its new training chip as a dedicated accelerator, optimizing it for AI workloads and improving power efficiency compared to traditional graphics processing units (GPUs). Taiwan-based TSMC, a key player in global semiconductor production, is manufacturing the chip.
Meta began the test deployment after completing its first “tape-out,” a crucial step in chip development in which engineers send an initial design to a factory for fabrication. This process typically costs tens of millions of dollars and takes three to six months to complete. If the test fails, Meta must diagnose the issues, redesign the chip, and repeat the tape-out phase, adding both time and cost.
Meta and TSMC have declined to comment on the project.
Meta’s MTIA Chip Series: Learning from Past Failures
The training chip is the latest development in Meta’s Training and Inference Accelerator (MTIA) series. The program has faced multiple setbacks, including the cancellation of an earlier chip at a similar stage. However, in 2023, Meta successfully deployed an MTIA inference chip for AI-based recommendation systems that power content selection on Facebook and Instagram.
Meta executives have outlined plans to integrate in-house AI training chips by 2026, initially focusing on recommendation systems before expanding into generative AI applications, such as its Meta AI chatbot.
Meta previously abandoned an internal inference chip after it underperformed in a small-scale test, leading the company to spend billions on Nvidia GPUs in 2022. Since then, Meta has remained one of Nvidia’s largest customers, using GPUs to power AI models, including its Llama foundation series and advertising systems.
Despite Meta’s push for custom silicon, doubts are emerging over whether simply scaling up large language models with more data and computational power will continue to yield breakthroughs. The January 2025 launch of DeepSeek’s low-cost AI models, which achieve computational efficiency by leaning more heavily on inference, has intensified discussion of alternative AI scaling strategies.
Nvidia shares have fluctuated amid this volatility in AI investment. After losing up to 20% of their value during the DeepSeek-triggered AI stock rout, they rebounded before dipping again on broader trade concerns.
Meta’s latest AI chip test signals its commitment to building a self-sufficient AI infrastructure while continuing to rely on Nvidia’s industry-leading GPUs. If successful, the new MTIA training chip could help Meta optimize costs and enhance AI performance across its platforms.
As competition in the AI hardware space intensifies, Meta’s long-term bet on custom silicon will play a critical role in shaping the future of AI-driven social media and digital ecosystems.