AMD's Most Misunderstood Technology.
Update
This is an update of my original AMD deep dive.
AMD’s FPGA technology is starting to rear its head. And this tech is far more consequential than the market believes.
I’ve been saying for years that AMD’s FPGA technology would enable AMD to take a dominant share of the AI-at-the-edge market: for example, check this post out. This week, we saw IBM announce that they were able to run a quantum error-correction algorithm on an AMD FPGA 10x faster than required. This stands as tremendous validation of the FPGA thesis, because it shows that AMD’s FPGAs can meet latency requirements far stricter than those of present AI workloads. Also:
By acquiring Xilinx (a deal announced in 2020 and completed in 2022), AMD made it nearly impossible for anyone to catch up with this tech. Xilinx was the undisputed FPGA leader at the time, with Altera a distant second; Intel acquired Altera in 2015 and then buried it.
The only way to productively deploy AI technology across the AI value chain is via chiplets, which, as I’ve said many times, are the key enabler of AMD’s platform.
As AI moves out to the edge, the most important factor is latency. Customers will want to run inference quickly and cheaply, and they won’t really care about much else. FPGAs are so powerful because they can essentially take on the shape of the algorithm they are running, optimising at the circuit level. The IBM announcement, with the accompanying paper expected to be published this week, shows that FPGAs are capable of outperforming GPUs in algorithmic workloads with bleeding-edge latency requirements. Here’s what IBM said about this last week:
Designing and implementing a way to do this at scale, and without requiring expensive GPU clusters, is a significant achievement to scaling useful quantum computers.
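To make the circuit-level point concrete, here’s a minimal sketch of my own (purely illustrative; not IBM’s or AMD’s code, and all names are hypothetical). FPGA fabric is built from lookup tables (LUTs), so a small piece of logic, such as the 3-bit majority vote that decodes a simple repetition code, can be “baked” into a truth table at configuration time and answered in one constant-time read, instead of being executed as a sequence of instructions:

```python
# Illustrative sketch: how an FPGA-style lookup table (LUT) "takes the
# shape" of an algorithm. We precompute a 3-bit majority vote -- the
# decoder for a simple repetition code -- as a truth table, so at run
# time the answer is a single indexed read, with no arithmetic or branches.

def majority(bits):
    """Reference implementation: instruction-style computation."""
    return 1 if sum(bits) >= 2 else 0

# "Synthesis" step: enumerate all 8 inputs once and bake the logic into
# a LUT, loosely analogous to configuring FPGA fabric for this algorithm.
LUT = [majority(((i >> 2) & 1, (i >> 1) & 1, i & 1)) for i in range(8)]

def majority_lut(b2, b1, b0):
    """Run-time path: one constant-time lookup."""
    return LUT[(b2 << 2) | (b1 << 1) | b0]

if __name__ == "__main__":
    # Sanity check: the LUT agrees with the reference on every input.
    for i in range(8):
        bits = ((i >> 2) & 1, (i >> 1) & 1, i & 1)
        assert majority_lut(*bits) == majority(bits)
    print("LUT matches reference for all 8 inputs")
```

Real FPGA decoders are of course written in an HDL and wired into deep pipelines, but the principle is the same: the algorithm becomes the circuit, which is why per-sample latency can be so low.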
AI in general, and not just at the edge, will face stricter latency requirements over time. Inference will be everywhere, and the optimal way to infuse any compute engine with inference capabilities will be to add an FPGA to it. FPGAs will likely take some share from GPUs, but I believe most of the upside will come from infusing the entire compute stack with inference capabilities at marginal cost. Indeed, the entire stack will be running inference (both to optimise itself and to serve third-party customers), leveraging an ever-growing variety and complexity of neural networks. Non-morphable chips (those that cannot change on the go) will not be able to compete.
As I’ve pointed out in the past, following the Xilinx acquisition, what made me bullish on AMD’s technology roadmap was the fact that AMD’s client division has been propelled over the past year by its AI PCs. The latter are essentially powered by a CPU combined with an NPU built on Xilinx-derived technology, which infuses AMD’s AI PCs with inference capabilities. I understood then that soon every single compute engine on Earth would need this capability to stay competitive. For example, I believe we will soon see AI GPUs use FPGAs to run inference on their own workloads and optimise how they execute them.
Until next time!
⚡ If you enjoyed the post, please feel free to share with friends, drop a like and leave me a comment.
You can also reach me at:
Twitter: @alc2022
LinkedIn: antoniolinaresc
Disclosure
These are opinions only of the individual author. The contents of this piece do not contain investment advice and the information provided is for educational purposes only and no discussions constitute an offer to sell or the solicitation of an offer to buy any securities of any company. All content is purely subjective and you should do your own due diligence.
Antonio Linares makes no representation, warranty or undertaking, express or implied, as to the accuracy, reliability, completeness or reasonableness of the information contained in the piece. Any assumptions, opinions and estimates expressed in the piece constitute judgments of the author as of the date thereof and are subject to change without notice. Any projections contained in the Information are based on a number of assumptions as to market conditions and there can be no guarantee that any projected outcomes will be achieved. Antonio Linares does not accept any liability for any direct, consequential or other loss arising from reliance on the contents of this presentation. Antonio Linares is not acting as your financial, legal, accounting, tax or other adviser or in any fiduciary capacity.