Mark Oliver:
You always end up with the perfect combination of logic to routing for your design. So it's always a hundred percent efficient. You do away with all that power-hungry routing up in the metal layers, and you end up with a chip that's smaller, lower power, denser, and in doing so cheaper, cheap enough to be able to go from a prototyping area in the lab straight into high-volume manufacturing.
Zach Peterson:
Hello everyone and welcome to the Altium OnTrack Podcast. I'm your host Zach Peterson, and today we'll be talking with Mark Oliver, VP of marketing at Efinix. For those of you familiar with semiconductor companies, you may not be familiar with Efinix. They're a smaller company, but they have some very innovative solutions targeting some advanced electronics applications. Mark, thank you so much for joining us.
Mark Oliver:
Thank you, Zach. Always a pleasure.
Zach Peterson:
Yeah, I was interested to talk to you because, well, a couple reasons. First of all, we don't often have folks from the semiconductor industry here to talk about electronics, and I wish we had more of you guys. But also we've known each other for a little bit, and looking around on your company website, I see that you guys are targeting some cutting-edge areas pretty hard with your products. So I think just to get started, maybe you could tell us a bit about Efinix and the history of Efinix and then what makes your company and your products different.
Mark Oliver:
Sure. So we're an FPGA company. FPGA is... What is it, about a 30-year-old technology at this point. The architectures have evolved, but they're essentially the same as they were 30 years ago. We looked at that about a decade ago and we said, "Look, stop. Time-out. There's a better way of doing this. There's a more efficient way: reinvent the FPGA from the ground up." So we set about doing that. Ten years later, we have some of the most efficient, lowest-power, densest FPGAs on the market, and that's having a really profound effect on the way FPGAs are adopted in the marketplace and the way people are looking at designing systems for a very broad range of markets right now.
Zach Peterson:
So it sounds like you have been on the driving edge, or the cutting edge, of evolving FPGAs into something new beyond their traditional uses, either as a prototyping tool or as a substitute for a main processor when there isn't a specialized chip available, or when a typical CPU just won't work.
Mark Oliver:
Yeah, if you look at the way people have used FPGAs, they are a miracle product in many ways. They're a blank piece of silicon: you can put your circuit right onto the FPGA and you end up with a piece of custom silicon. That has always been fairly inefficient because of the architecture of an FPGA. A basic FPGA has lots of logic elements in it, and lots of interconnect and routing between them. The inefficiencies come from the fact that you end up with multiple, multiple layers of metal routing to link up all those logic elements. You have to decide, at design time for the FPGA, the ratio between those logic elements and the routing. And you can guarantee that as soon as you fix that ratio, it's wrong.
It's wrong for most designs, and it might even be wrong in multiple regions across a single design. There are going to be areas that need more logic, there are going to be areas that need more routing, and you are always going to be inefficient in some respect. So we said, "Well, to heck with that, let's throw that out. Let's invent a new logic cell that can be used either as logic or as routing." That way you can configure the logic cell at the time that you instantiate your circuit onto the FPGA, and by definition you always end up with the perfect combination of logic to routing for your design. So it's always a hundred percent efficient. You do away with all that power-hungry routing up in the metal layers and you end up with a chip that's smaller, lower power, denser, and in doing so cheaper, cheap enough to be able to go from a prototyping area in the lab straight into high-volume manufacturing. So that changes the way people take systems to market.
Zach Peterson:
In this kind of evolution of FPGAs to these new form factors that you guys have been driving, what type of applications are you seeing that are making greater use of FPGAs? And do you see a time where companies get so big that they start to look at their FPGA usage and say, "Why don't we take this and convert it to a custom SoC?" Are FPGAs maybe getting pigeonholed a bit, even though they enable certain applications that might be tough with other chipsets?
Mark Oliver:
If you look at what's going on in the market right now: for decades, we've had Moore's Law working in our favor, so every 18 months or so, we double the number of transistors that can cost-effectively be put on silicon. That is no longer really the case. Moore's Law is slowing down. If you look at the cost of putting together a custom piece of silicon at 16, seven, or five nanometers, it's going up exponentially. So the applications that can actually justify putting together a piece of custom silicon are going down, and the volumes required to amortize that expense are going up. So the options available for people to create custom silicon for applications out in the field are dwindling, frankly. If you contrast that with what's going on in the marketplace right now, the applications tend to swing like a pendulum between data centers and edge.
So you go from mainframes to PCs, from PCs back to the data center, and from the data center right back to the edge. And we're at one of those inflections right now, where people are driving applications back out of the data center towards the edge, where all that data is being created, where the data still has local context, and where you can operate on that data in real time with low latency to extract value from it. In order to do that, you also more or less need AI, and AI massively increases the amount of compute that you need in those edge applications.
So with that in mind, you don't have the ability to produce custom silicon anymore. It's too expensive. The diversity of data being produced at the edge means that ASSPs get less and less relevant. So what you need is a cost-effective, low power, small form factor way of putting custom silicon out there at the edge. And that is increasingly Efinix FPGAs. They have the revolutionary density, low power, and efficiency that you need to actually drive those custom silicon solutions into an FPGA and put them out at the edge.
Zach Peterson:
I think there's a few things to unpack there, but you mentioned something earlier about developing custom silicon, and you mentioned technology nodes like, I think, 16 or 14 nanometer, or seven, or even five, really getting to high density. But are people still developing for maybe an older technology node? Is that possible? Or do you run into an issue where the density is now so low that, for an advanced application, the physical size of the chip becomes unacceptable?
Mark Oliver:
Let me answer that. In your previous question, you asked, "Do we see a case where the FPGA would be absorbed into heterogeneous processing nodes, into an ASIC?" And let me answer the second question by saying, well, no we don't, and yes, we do see people designing in larger process nodes. People have approached us and said, "Can we integrate your FPGA technology into our ASIC?" And we have done that on a couple of occasions where it made sense. But for the vast majority of cases, it really doesn't make sense, so people are still having to design in those process nodes to get the performance, the power, and the compute density they need. They're designing in seven nanometer and then trying to integrate FPGA technology into those ASICs, which is hard. So you're still ending up with a high-cost, long-time-to-market, high-risk development environment.
What we're seeing more often these days is a SiP, a system-in-package, kind of solution, or even, I hate to say it, chiplets, because that's kind of a misnomer and it's not well understood. But in a SiP kind of implementation, you're able to take FPGA technology which is instantiated in the perfect node for it, and you're able to mix that with high compute, perhaps at a lower process node, or with high-voltage analog IO circuitry in a larger process node. That makes the best use of the efficiencies of all of those process nodes, and it means that you end up with the most cost-effective and lowest-risk time to market.
Zach Peterson:
Okay, that's interesting. It sounds like we're back to another case where the packaging is really the big enabler of the advanced application. In this instance you're integrating an FPGA block with some of this more traditional circuitry, whether it's compute, RF, a specialized analog block, or signal processing. Whatever it's going to be, this is another case where the system design has gone from the board and the enclosure down to a lower level, where you're now looking at the packaging and interconnecting the chips at such a small scale.
Mark Oliver:
Yeah, that's true. If you look at the trend, as we said, things are moving away from the data center towards the edge, and they're traversing the fog and the cloud right out to the extreme edge of IoT. Out there you end up with small-footprint, constrained-space applications with low power and certainly no environmental conditioning. So you end up having to do large amounts of integration into things like SiP packages to get the footprint small enough, the power low enough, and the speed high enough to make those applications make sense at the extreme edge.
Zach Peterson:
It sounds like, in general, the form factor that people are demanding from an FPGA is getting smaller. Because I think if you look at the product options that are available, you go onto a search engine or you go onto Octopart, let's say, and you type in "FPGA", you see these big, huge, ridiculous chips with a couple thousand IOs on them, or something like this. And it's like, "Why do I need that? This chip costs hundreds of dollars, possibly more." It'll exceed your product cost, just way overshoot it. It sounds to me like the form factor that people really need, even if they don't know they need it, is actually going down, not up.
Mark Oliver:
Yeah, there are probably three areas where people have traditionally used FPGAs, right? There's the extreme low end, where an FPGA is just a wonderful piece of glue logic, and more power to people that want to do that; that's not going anywhere. Recently, people have been taking FPGAs and putting them in the data center so that you can do things like AI acceleration up there. In a data center you have unlimited space, unlimited cooling, and unlimited power, and you just need to apply brute-force compute to the workload that they're giving you. So you're right, people are putting together huge, great, dinner-plate-sized FPGAs that have big power consumption and are correspondingly more expensive. With the move away from the data center and out to the edge, where 5G is enabling new applications with low latency and data is king, you don't have those luxuries of unlimited budgets, unlimited power, and unlimited cooling.
So the mid-range FPGA is really starting to come into its own, where you're looking for the densest compute that you can get, the highest level of acceleration that you can get, but at a price point, a form factor, and a power consumption level that mean you can use it in high volume in consumer and edge applications. So yeah, the mainstream FPGA market is set to explode, because that's where the growth is in the market right now, and that's the area of the market that isn't being served by the technology now that Moore's Law is running out.
Zach Peterson:
You mentioned the costs involved in producing custom silicon, and I've noticed that there's kind of been this trend recently where the semiconductor companies are piling into these specialty areas like PMICs. I saw a lot of new PMICs released through 2019, 2020, and onward. And then it seems like recently it's been SoCs targeting specific applications. So they'll include certain interfaces, or they'll include the newest version of Wi-Fi or something like this. They're really just stuck on these huge-volume markets where they know they can sell a million-plus chips, what, not even a year, a month, right? Maybe a week if they could really get it to that level.
Mark Oliver:
Right.
Zach Peterson:
And so the FPGA just seems to be, again, coming into its own as this mid-volume kind of solution, but one that can still scale up to the point where maybe you are selling a hundred thousand units a month of your end product. Is that a fair assessment?
Mark Oliver:
Yeah, that's absolutely right. The only place you're seeing custom silicon solutions being produced right now is in applications that can command those kinds of volumes, just because of the need to amortize the cost of the production. So you're right, there are areas like the PMICs that you mentioned, some video processing with generic image signal processing techniques, Wi-Fi, USB, stuff that actually can command the kind of volumes that you need. The good thing about the FPGA is that, being a blank piece of silicon, you're essentially looking at a silicon die that can be sold to a broad swath of applications. And in doing that, you end up amortizing the cost of the FPGA development over a broad set of customers and a broad set of markets. So it becomes a very attractive silicon proposition where you've got a single die that can address a thousand customers in 2,000 or 3,000 different applications and amortize the cost of the production as a result.
Zach Peterson:
You mentioned the cost of production, and I agree with you on that, but I think it's also the market area that's really broadened out because it seems to me that it's not just like a PMIC that the big semis are developing, but it's like a PMIC specifically for EVs, and then they'll just push it to death. And it's so targeted nowadays, whereas I feel like in the past, they would market by function instead of by whatever the trendy tech application area is of the time.
Mark Oliver:
Yeah, that's perhaps true. But the marketplace is pushing against that, and that's kind of the dynamic you see going on in the market: you have to come up with an application that has huge volume to amortize the cost, but in order to produce that silicon, you have to come up with a specification that is generic enough and yet specific enough for the application. And if you look at what's happening in automotive, for example, your average car right now has at least five or six optical cameras around it. It's got some lidar, it's got radar, it's got ultrasonics, it's got, you name it.
All of that data has its own custom characteristics, and you're going to need something fairly specific for handling each of those sensors. So you end up with a very diverse set of applications that start dividing up the potential volume. So despite your best efforts to create custom silicon for these high-volume automotive applications, you end up with subcategories of automotive that are very diverse. And you end up, again, struggling to create a business model that makes sense for creating custom silicon for those applications.
Zach Peterson:
The economics of semiconductors really work against you at some point.
Mark Oliver:
You're right there.
Zach Peterson:
Given what you said about the adaptability of FPGAs, I think it would be natural for someone to wonder, "Why isn't everything just built using an FPGA?"
Mark Oliver:
It's a good question, and it's something that we are trying to solve, believe me. I think one of the issues is that there is a perceived barrier to entry. If you look at the design techniques that people are using these days, I mean, Arm has done a fantastic job of creating software engineers for hardware tasks: putting together a controller in a package with some predefined accelerators and unleashing the software engineering community on it, and software engineers probably outnumber the hardware community a hundred to one.
Zach Peterson:
I would totally agree with that.
Mark Oliver:
Yeah. So you end up with people accelerating their time to market, taking as standard a product as possible, writing applications in software, and accelerating the bare minimum where they have to in hardware. Since that's the way the world looks right now, and since the world is dominated by software engineers, we at Efinix are actually attempting to do the same thing.
So instead of insisting that all of our customers take classes in VHDL or Verilog and learn how to create custom hardware silicon solutions, we're saying, "Let's integrate a RISC-V processor." It's an open standard and it's highly configurable, and we can talk about that if you want, 'cause I think that's one of the critical things about RISC-V. But once people have instantiated a RISC-V in their FPGA, we're supplying them with platforms, with templates, with entire designs for vertical markets, with small areas in those designs that say "your FPGA content here, your IP here." So people can differentiate their product, accelerate their product, with the bare minimum of VHDL design, and leverage an entire ecosystem that we've put together on that same die. So they end up with a very low NRE and a fast time to market on a flexible platform, without having to go back to class and study VHDL for the next couple of years.
Zach Peterson:
So you brought up something important here in this discussion about VHDL. I will admit, as a student doing physics, I at least had to do some programming. We had to do Java and C, and eventually I taught myself Python. We never even knew what VHDL was. That's as a physicist, and then later when I was teaching engineers, I still would've had no idea. And I think for a lot of engineers, because it's seen as this other thing that's relatively inaccessible, people gravitate towards working with MCUs because they probably know C or enough Python to get into MicroPython and then they can use one of those platforms to program their system. So it sounds to me like you've really taken a community-first approach, which we're fans of here at Altium, and focused on providing those resources that make it easy for people to adopt your products. Is that kind of your strategy here around growth?
Mark Oliver:
That's exactly right, Zach. I'm a double E by nature. So when I was at school, I did the same thing: I learned some BASIC to start with and then a whole bunch of C and C++. I'm not a VHDL programmer, so you're absolutely right. We've taken an approach that says, "What do our customers want to design in? Where does their design experience lie? Let's put together a system that enables them to embrace the technology, embrace the FPGA, and put together designs rapidly, easily, and accurately the first time. How do we provide them with design tools, ecosystem templates, and a whole bunch of stuff that we've put on our GitHub as open source, so people can go there, work through our examples, and see how we've done things? And how do we keep things as much as possible in software rather than making people take a hardware approach?"
People have embraced that, I think, with the RISC-V that we've put together, which has a very point-and-click kind of configuration. It configures itself from a hardware perspective, and we then give that RISC-V and the development environment to the software engineers so that they can develop on the FPGA platform efficiently.
Zach Peterson:
So you've mentioned RISC-V a couple of times, and I know what RISC-V is, but some folks in the audience might not be familiar with RISC-V and why it is so important for processor design, and specifically for FPGAs. Because there are more and more vendors that are supporting RISC-V cores and providing those development libraries that folks need through their IDEs to deploy a RISC-V core directly to an FPGA. Maybe unpack what RISC-V is for folks and how it enables some of the applications that you've referenced.
Mark Oliver:
Sure. So there's two parts to that: I can cover what RISC-V is, and then why it's a match made in heaven for FPGAs. If you go back a few years, the embedded processor market has been dominated by Arm, obviously. So if you want to put an embedded processor together, you go to Arm and you say, "Can I license one of your cores?" You pay them a crazy amount of money and they will give you the license to be able to put one of their cores in your embedded processing design. RISC-V started as an open...
Zach Peterson:
Real quick. When you say, "Put one of the embedded cores," you mean, physically in the actual silicon as well as in the logic that you're going to implement in the silicon?
Mark Oliver:
Yeah, a hard core inside your silicon. On the custom chip that you are producing, you can drop in an Arm core, put all your peripherals around it, and then use the Arm core for embedded compute. And there's a whole ecosystem, a fantastic ecosystem, that's built up around that exact business model. So that's the way it's been for the longest time. But it is a closed architecture with very few exceptions, and unless you have very deep pockets, Arm are not going to let you get in there and start messing around with their architecture. So you end up with a fixed instruction set and a fixed architecture that you then code to. By contrast, RISC-V started as a project at UC Berkeley, I believe, and it was an open-source attempt to create a reduced instruction set computing architecture with an open instruction set.
Five iterations later, we have RISC-V, which has caught on. So you can now get a RISC-V processor for free off any number of websites and any number of open-source communities that have instantiations of RISC-V out there. You can pick that up, put it into your custom silicon, or, in our case, inside an FPGA, and you have yourself an embedded processor. A very capable embedded processor, and one that is scalable: because it's open source, you can scale its compute resources up and down to meet the requirements of your application.
Zach Peterson:
When you say an "instruction", I think that this can be a bit abstract for some folks, but an instruction would mean like a logical instruction or some manipulation of input data required to do all of the typical computing tasks that a processor would normally do, or that we would expect it to do. Is that the correct way to think of it?
Mark Oliver:
Yeah. You normally think of coding for a processor in a language like C, C++, something like that, or Python. The compiler that you use to take that C to machine language takes the program that you write and converts it into a series of very low-level instructions for the processor. Fundamentally, you think of it as being two registers in: you load two registers with data out of memory, you do an operation on them, an add, subtract, multiply, whatever, the answer goes into a third register, and you then store that back into memory. So the entire program gets transformed into those low-level operations. And for most processors, that suite, that library of operations, is defined up front and you get what you get. With RISC-V, the library of operations is extensible, so you can dial it up or down depending on how complex you want your processor to be.
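As a concrete illustration of that lowering, here is a minimal sketch of a single C statement and roughly the register-level sequence a compiler might emit for a RISC-V target; the exact instructions depend on the compiler and optimization settings:

```c
#include <stdint.h>

/* c = a + b, where a and b live in memory. A RISC-V compiler
   lowers this to roughly:
       lw  a0, 0(a0)    # load register 1 from memory
       lw  a1, 0(a1)    # load register 2 from memory
       add a0, a0, a1   # operate; the answer lands in a result register
   with the caller then storing the result back to memory. */
int32_t add_from_memory(const int32_t *a, const int32_t *b)
{
    return *a + *b;
}
```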
But more importantly, and critically for us as an FPGA vendor, there are a whole bunch of those instructions that are undefined. So they have an instruction code that the program can call, but instead of making it an "and", or an "or", or a "multiply", or whatever, it's your custom instruction. So you can now define what you want that processor to do based on your application and give it an instruction number. For example, AI is massively complicated, and that's one of the problems of putting compute out at the edge, but AI models tend to be dominated by big mathematical matrix multiplies in operations like convolutions. So with RISC-V, instead of an add or multiply kind of thing, you could say, "Let's take these two registers, do a big complex convolution on them, and store the result in that third register."
So you have now taught your RISC-V processor to be an AI-centric processor, because it now knows how to do convolutions. Putting a RISC-V in an FPGA is just a fantastic marriage of technologies. You have an FPGA that has the RISC-V processor in it, and you have a big old blank scratchpad of circuitry surrounding that processor into which you can put custom instructions and accelerators, and you can link those custom instructions in with the processor. So the processor now believes it's a processor that does your bidding, and that makes it a very powerful processor for your particular application. And we see a many-X improvement in performance just from a couple of simple instructions that we can put into the FPGA.
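For readers curious what calling such an instruction looks like from software, here is a minimal, hypothetical sketch assuming a GNU RISC-V toolchain and a core whose FPGA fabric decodes the custom-0 opcode space. The conv_acc name and the encoding fields are illustrative assumptions, not Efinix's actual API:

```c
#include <stdint.h>

/* Hypothetical custom instruction in the RISC-V custom-0 opcode
   space (major opcode 0x0b). The GNU assembler's .insn directive
   emits an R-type instruction: two source registers go in, the
   FPGA logic does its work (say, one convolution step), and the
   result comes back in the destination register. The funct3/funct7
   values are assumed placeholders. */
static inline int32_t conv_acc(int32_t a, int32_t b)
{
    int32_t r;
    asm volatile (".insn r 0x0b, 0x0, 0x0, %0, %1, %2"
                  : "=r"(r)
                  : "r"(a), "r"(b));
    return r;
}
```

From C, the call looks like any other function, but it executes as a single instruction backed by FPGA logic.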
Zach Peterson:
So I think that really gets down to the advantage for a product development team. They want to develop a product and they have some data that they're bringing in. They want to run an AI model, or put it into an AI model, I should say. Using an FPGA, they can define that mathematical operation as a logic circuit on the FPGA, just using the development tools and then using RISC-V as the processor core with a custom instruction. It literally exists as a logic circuit, whereas if you did it in software, what would be going on? What's the issue with doing it in software?
Mark Oliver:
If you're doing it in software, you are working within the constraints of the standard instructions that you've been given with RISC-V. So you are stepping through: load register one, load register two, do an add, store it back into memory, and do that again. For some applications that might be exactly what you want to do, but not for many. So you end up doing a big, long string of sequential operations to get even the most basic tasks done. If instead you create a custom arithmetic unit in the FPGA, you can load register one, load register two, and have it do something really complex at hardware speed, in a couple of cycles, in that arithmetic logic unit, and then give you the result. You've created a custom processor that, as far as anybody is concerned, is still running your C program; you're just substituting in a couple of custom instructions instead of having to use long strings of the instructions that you're given as standard.
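Here is a minimal sketch of the purely sequential path Mark describes, assuming a simple dot-product kernel of the kind that dominates convolutions; each loop iteration compiles to a string of standard instructions:

```c
#include <stdint.h>

/* Software-only version: every iteration is loads, a multiply,
   an add, and loop-branch overhead, executed one instruction at
   a time. With a custom FPGA instruction such as the hypothetical
   conv_acc() above, the body could collapse into a single
   multi-cycle hardware operation. */
int32_t dot_product_sw(const int16_t *sample, const int16_t *coeff, int taps)
{
    int32_t acc = 0;
    for (int i = 0; i < taps; i++) {
        acc += (int32_t)sample[i] * (int32_t)coeff[i];
    }
    return acc;
}
```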
We've actually taken that one step further. For our RISC-V processor, we have what we're calling our TinyML platform, which is free, by the way; it's on the GitHub, go check it out. That will take TensorFlow Lite models, which have been quantized and somewhat optimized for running on microcontrollers at the edge. The TensorFlow Lite models have a hundred or so primitives in them for things like convolutions and max pooling, the kind of stuff that you find inside artificial intelligence models. They run on a microcontroller, but they run on a microcontroller with difficulty. We've created a library of custom instructions that implement those primitives, and we've bundled those all up behind a GUI. So you can take your AI model, profile it, and figure out which of those primitives are being used in your model.
You can drag and click with a mouse in our GUI to say, "Create custom instructions for these particular primitives that we're using." The tool will compile those custom instructions into hardware without you having to know anything about VHDL, obviously. It will include them in your FPGA project and give you the ability to call them from your TensorFlow Lite model running in C, creating hardware acceleration for that model. If you compare a program running in C to a program running with just a few custom instructions in hardware, we see about a 200 to 300 times performance improvement in that model running.
Zach Peterson:
By performance, what's the metric here? Is it the number of logic operations required to execute a command or is it power consumption? Is it just the latency? Well, how do you define performance?
Mark Oliver:
It depends on the model that you're running, but if we're running things like video processing, doing AI on video, so people detection, object detection, object classification, it's easier just to count the number of frames per second that you're able to get through when you're processing that video. So that just gives you a frames per second performance improvement.
Zach Peterson:
I see. So really the word "performance", if we're going to use air quotes, depends on the end application and it could be something unique.
Mark Oliver:
It does. And in the same FPGA, you might also have all of your video pre-processing, your image signal processing. You might be doing a whole bunch of other application-level stuff in Linux, and then you might be driving an HDMI display or an Ethernet output port. So the portion that you accelerate will change the actual acceleration of the final application, depending on the amount of time the application spends in it. So when we say the 200x improvement on the model, we're talking about a frames-per-second comparison on the part that we're accelerating.
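Mark's caveat, that the end-to-end gain depends on how much of the total time the accelerated portion occupies, is the classic Amdahl's law relationship. A worked example with assumed numbers (the values of p and s here are illustrative, not figures from the conversation):

```latex
S_{\text{overall}} = \frac{1}{(1 - p) + \dfrac{p}{s}},
\qquad \text{e.g. } p = 0.8,\; s = 200:\quad
S_{\text{overall}} = \frac{1}{0.2 + 0.004} \approx 4.9
```

So a 200x speedup on a portion consuming 80% of frame time yields roughly a 5x end-to-end gain, which is why the quoted frames-per-second figure applies to the accelerated part.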
Zach Peterson:
Sure, that makes sense. And I see different performance metrics being used even just in evaluating the effectiveness of a neural network. So I agree with the idea that you can't just generalize performance down to time or power. I think there's a propensity to do that when you're dealing with sequential computing sometimes.
Mark Oliver:
Yeah. You mentioned power, and that's an interesting thing here. If you were to take, for example, your model running in C, and it's barely fast enough for what you want to do, you can put some of our custom instructions in, again with a click and drag in a GUI, and accelerate your model 200 times by including these hardware custom instructions. So you now have a model that's running possibly 200 times faster than it actually needs to run, 'cause it was almost quick enough running in software.
So you can reduce your clock rate by a factor of 200 and get your performance back to where it was before you started accelerating it. Except when you reduce your clock rate, the power goes through the floor. So you can now trade off acceleration of the RISC-V against system power. Say you've got a thermal imaging camera, for example, where power is absolutely critical: you can't be putting heat into that enclosure and heating up the sensor, because bad things happen. You can now accelerate your RISC-V processor, in C, in software, inside the FPGA, figure out that you've got way more performance than you need, turn the clock down and just crush the power, and put the FPGA into that thermal sensor with no forced-air cooling and no thermal problems.
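The saving Mark describes follows from the standard CMOS dynamic power relation; the factor of 200 is his illustrative number:

```latex
P_{\text{dyn}} = \alpha \, C \, V^{2} \, f
```

Dynamic power scales linearly with clock frequency f (with switching activity α, switched capacitance C, and supply voltage V), so dividing the clock by 200 cuts dynamic power by roughly 200x, and more still if the lower clock also permits a lower supply voltage; only static leakage remains.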
Zach Peterson:
You've identified two points, the latency and then the power consumption, which have historically been the main barriers to bringing inference onto something like an Arduino.
Mark Oliver:
Yep, yep.
Zach Peterson:
And so it seems to me the solution is, even if you were going to do something on an Arduino, the typical AI accelerator solutions that you would implement in software, at the model level or at the data pre-processing level, get you part of the way there. But at some point you're going to have to reconsider the fundamental structure of your silicon, and that's when you might move to an FPGA. Then eventually, as your product scales, you might need to take that FPGA off the board and go with a custom SoC, just because of the economics. Is that a fair assessment of the product roadmap that someone might implement?
Mark Oliver:
Yeah, it is. And you'd probably find that the same kinds of models that you were attempting to run on your Arduino would also run on a RISC-V inside an FPGA, but could then be accelerated in hardware by the FPGA. So you get a ton more performance than you would ever get out of an edge microcontroller, but you're still running on an edge processor, and you are not doing massive amounts of data-center-class AI. Then at some point, yes, perhaps you need to get into a custom piece of silicon that costs many hundreds of dollars, does that one thing, and is probably two years out of date because that's when they taped it out. Or you end up shipping stuff up to the data center and doing it the old-fashioned way, with a big old several-hundred-millisecond round trip to the data center.
Zach Peterson:
Yeah, absolutely. Given the advantages in certain areas that you have highlighted, have you seen product design teams say, "You know what, we were going to go to custom silicon eventually, but we're just going to stick with FPGAs for this product"?
Mark Oliver:
I would say most of our customers are in that ballpark. When I give customers the same pitch that I've just given you, you see a little light bulb go off, and they suddenly realize that the last four or five vendors they've talked to are no longer in the ballpark, because that's not where design is going, that's not the state of the art, and that's not where they want to spend their 30 million dollars over the next five years of product development. So yeah, I think most people get this, and when you show them the proof points of the FPGA, the efficiency, the power, and the kinds of applications they can install on the FPGA, people are like, "Got it, this is the future, and I know what we're doing next."
Zach Peterson:
Well it'll be interesting to see these smaller, more targeted FPGAs make their way into certain products. Because I think still, if you look at the really common consumer stuff like your iPhone, your tablet, your smartwatch, typical stuff that people are buying for Christmas right now, or buying each other for the holidays, it still uses an SOC. And it just seems to me that these big vendors have such a corner on the market that it's difficult for everyone else to get in. And I almost feel like for the product development team or the startup that just has a great idea and they need to get their product out there, FPGAs are a good solution for them.
Mark Oliver:
The applications that you cite there, your iPhone and your smartwatch, bizarrely, those are the kinds of applications where, if you're shipping 500 million a year, then go ahead, knock yourself out, and put some custom silicon together, because that makes the most sense. If you're talking about a smart doorbell that will upload video data to the internet somewhere and save it for you, or a smart speaker that'll wake up when you say its wake word, and I'm not sure whether there are any around here, those kinds of applications are not doing the 500 million a year, but they do need a fast time to market. They need an edge-powered, edge-capable class of processor. And they also need the ability to be upgraded over time: you want to be able to upgrade the capabilities, the AI algorithms, the people detection algorithms, the pet detection algorithms, of some of these edge devices. And with an FPGA you can do that even after you've deployed it: you can upgrade it at a hardware level, not just give it a new firmware update.
Zach Peterson:
That was another thing I was going to ask about: modifying the silicon. That seems to be such a huge advantage as well, because, like you had just mentioned, if you're going to tape out custom silicon, by the time it hits the market it's already obsolete, at least in terms of technical capabilities. They may not market it as obsolete with distributors, but when you look at what's going on in the chip, it certainly is obsolete. So with the FPGA update process, ideally you could look out, as a product development team, let's say three to five years. Maybe you over-design the gen-one product in terms of the FPGA that you select, but this gives you the capability to build on top of that product and evolve it over time without having to do a new chip revision, as well as a new board revision, each time.
Mark Oliver:
Absolutely. There are certain areas of the market, particularly in things like vision processing and certainly in AI, where the field is advancing so fast that the worst thing you can do at this point is put together a 50-million-dollar development plan for an AI chip, only to find out that the models available when you bring the chip out aren't the state-of-the-art models and aren't high enough performance for the application. Deploying with an FPGA gives you the ability to catch up, revise the hardware, and come out with a brand-new product on the spur of the moment.
Zach Peterson:
I think that's why you continue to see the reliance on number one, data center, and then number two, doing it in software because the software is technically infinitely extensible within the limits of your computer. And then the data center's the data center, it's tried and true and all you got to do is, what, put a cellular module on your device or make sure it has Wi-Fi access and then it's throwing data to an API and getting the results back.
Mark Oliver:
Yep. No, that's exactly right. I mean, that's the beauty of software: you have a fast time to market, and when you have one of those "Doh!" moments, you can fix it for free. But there are applications out there, and there are going to be increasingly large numbers of them, that that's not going to work for. When you try to do really high-level AI at the edge, you are going to need to be able to upgrade the hardware acceleration of that compute element, because it's going to get exponentially more difficult.
If you have a processor that is in a self-driving car somewhere, then you no longer have the option of shipping that data back to a data center and waiting half a second for the result to come back. So there are real-time applications that need exponentially higher compute with zero latency, and we're seeing that more and more as this pendulum swings back out of the data center and towards the edge. So I think the age of the FPGA has come, and I think with the increased efficiency, lower cost, and lower power of Efinix technology, we are going to be at the forefront of pushing FPGA adoption out into the mainstream markets, out to the edge, the far edge, and beyond.
Zach Peterson:
You mentioned half a second, or however much latency is required to get results from a data center, let's say, through a wireless connection. But then you brought up automotive, and if you really think about it, that fraction of a second can really be the difference in saving your own life or someone else's.
Mark Oliver:
Yeah, it's many, many yards of braking distance, assuming that you have a nice high-speed link to a data center. And I've got to imagine there are large regions of rural America where that is not the case.
Zach Peterson:
Yeah, absolutely. You brought up, or alluded to, heterogeneous integration earlier, where folks are taking an FPGA block, let's say a processor block or a logic block, and putting it in the same package as some other traditional components, whatever those might be. Do you think that the big processor manufacturers are going to start doing that with their own FPGA products? Or do you expect some of the other semis that maybe don't do large CPUs, that aren't the Intels or the AMDs of the world, to start working with FPGA companies like yourself to bring those FPGA blocks into their own processors, whether it's added onto the ASIC or brought into a package?
Mark Oliver:
Yeah, I do, and I think we're at the leading edge of seeing that right now. A lot of the benefits that we are seeing as an FPGA company come from starting to put together SiPs that contain things like flash and memory, making self-contained video processing chips in a single package. Carrying that to the next level, you then start putting custom IO on there, and as we've said, Wi-Fi, USB, Ethernet, whatever, are good candidates for that. So you end up with a self-contained edge vision system in a package. But once you start down that path, inevitably the networking companies, the controller companies, the automotive microcontroller companies are going to realize that there is benefit to be had from having a reconfigurable portion of their application processor. So I think increasingly we're going to see that, and I think the default method for doing it is going to be a SiP, kind of chiplet, implementation.
Zach Peterson:
Especially as packaging manufacturing capacity starts to, I guess, broaden out of one part of the world and possibly come back here to the US, and possibly into Europe. My hope is that that type of design approach for a product development team becomes much easier, the market matures, and then eventually you can just go to Efinix and say, "Hey, we need to order 200 of this die, and we're just going to put it into our package, and we're going to design the package."
Mark Oliver:
Yep. And we actually have customers that do that. We have one large customer that we ship die to; they have entire libraries of custom instructions that they've come up with for our RISC-V processor that...
Zach Peterson:
That's so cool.
Mark Oliver:
...tailor the FPGA to any number of their products. So they can start a product going down their line, and depending on the, I was going to say software load, but the hardware custom instruction load that they put into that chip, they end up with one of hundreds of products coming off the end of their line, just because it's customized and optimized for the particular SKU that they're putting together that day.
Zach Peterson:
I see, I see. That's so cool. Well, I think we're going to have to wrap it up because we're getting up on time, but this is such a cool look into what's going on in the semiconductor industry. I was talking with another guest previously, and we kind of made the remark that we're beholden to you guys as board designers; we have to know what you guys are up to and what you're doing, and we can't be caught blindsided. So I want to thank you for coming on the podcast and discussing all of this with us. This has been really informative, and I hope for all the folks watching out there, it's been educational.
Mark Oliver:
All right, no problem, Zach. My pleasure. And I'll come back anytime you invite me.
Zach Peterson:
All right, thank you so much, Mark. We've been talking with Mark Oliver, VP of marketing at Efinix. We hope you'll all subscribe, make sure to leave some comments on the video, and you'll be able to keep up with all of our tutorials and podcast episodes as they come out. Thanks again everybody for watching. And last but not least, don't stop learning, stay on track and we'll see you next time.