This is going to come a bit out of left-field...

The key to improving a system is to first identify the current bottleneck and prioritize that. So for a given technology like FPGAs, you need to ask whether its advantages (such as higher control frame rates) are relevant to the current bottlenecks, and if so, whether the advantages outweigh the drawbacks.

Again, this is all speculative.

I am a software/algorithms guy. While I am certainly inexpert in hardware, I am quite good at partitioning problems and finding the right abstractions.

Our current lash-ups strike me as having awkward boundaries.

My speculation is that software-controlled hardware (FPGAs or CPLDs) has become cheap enough that it is practically a wash in cost compared to existing special-purpose ASICs. Further, since I can push improved algorithms down into the hardware, there is a chance of a better end result. Perhaps much better.

Put differently, special-purpose ASICs might be pointless, and there might be a large upside.

Also, I have used better algorithms to get very high performance out of hardware, from my first job out of college to a more recent exercise a few weeks back. So I am likely to find what is possible here.

I do not want ASICs with baked-in algorithms, if there is a chance that I can come up with (or find) better.

@Ryan_Carlyle In your list, I would have software running on a general-purpose CPU (likely a Raspberry Pi 3) handle steps (2) through about (3.5). The FPGA would perform (3.6) through (5).
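To make that split concrete, here is a minimal sketch in C of what the CPU-to-FPGA handoff might look like. The segment fields, the FIFO idea, and the SPI transport are my assumptions, not a settled design:

```c
#include <stdint.h>

/* Hypothetical motion segment the Pi would hand to the FPGA. The CPU
 * side does parsing and planning; the FPGA turns queued segments into
 * step pulses. Every field here is illustrative, not a fixed layout. */
typedef struct {
    int32_t  steps[4]; /* signed step counts per axis (X, Y, Z, E)  */
    uint32_t rate;     /* step rate at segment entry                */
    int32_t  accel;    /* rate change per tick: the "derivative"
                          the FPGA would integrate                  */
    uint32_t ticks;    /* segment duration in FPGA clock ticks      */
} motion_segment_t;

/* Placeholder transport; stands in for whatever bus the board
 * actually provides (SPI, parallel GPIO, etc.). Not a real API. */
extern void spi_write(const void *buf, uint32_t len);

/* Stream a planned segment into the FPGA's command FIFO. */
static void queue_segment(const motion_segment_t *seg) {
    spi_write(seg, sizeof *seg);
}
```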

Feedback would be nice. The Trinamic folk figured out how to do stall detection. What sort of feedback would the FPGA need to determine the same, and are there usable generic circuits? (Put this in the “would-be-nice” category, but not required.)
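One generic option, assuming the axis has some position feedback (say, a cheap quadrature encoder): compare commanded steps against measured steps and flag a stall when the following error grows too large. A minimal sketch in C of what that comparator would do; Trinamic instead infers load from motor back-EMF, with no extra sensor:

```c
#include <stdint.h>
#include <stdbool.h>

/* Generic stall detection the FPGA could implement, given position
 * feedback. A stalled motor stops accumulating encoder counts while
 * the commanded position keeps advancing, so the following error
 * grows past any threshold a healthy move would produce. */
bool stall_detected(int32_t commanded_steps, int32_t encoder_steps,
                    int32_t max_lag /* allowed following error, in steps */) {
    int32_t error = commanded_steps - encoder_steps;
    if (error < 0)
        error = -error;
    return error > max_lag;
}
```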

I divide between layers (3) and (4) because I suspect there is a derivative (or derivatives) of motion that could be computed by software and integrated by hardware; see the sketch after the next paragraph.

Oh. And I expect to teach the hardware to handle that derivative.
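As a sketch of what “teaching the hardware” could mean: software loads a step rate and its derivative, and the hardware integrates both every clock tick. In an FPGA this is just two accumulators; the C model below uses 16.16 fixed point, which is an assumption for illustration:

```c
#include <stdint.h>

/* Model of a per-axis FPGA step generator. Software supplies a rate
 * and its derivative (acceleration); hardware integrates per tick.
 * A phase-accumulator overflow marks the moment to emit a step. */
typedef struct {
    uint32_t phase; /* phase accumulator; wrap-around emits a step */
    uint32_t rate;  /* steps per tick, 16.16 fixed point           */
    int32_t  accel; /* rate change per tick, 16.16 fixed point     */
} axis_t;

/* Returns 1 when a step pulse should fire on this tick. */
static int axis_tick(axis_t *a) {
    a->rate += (uint32_t)a->accel; /* integrate the derivative     */
    uint32_t prev = a->phase;
    a->phase += a->rate;           /* integrate rate into phase    */
    return a->phase < prev;        /* wrapped around: emit a step  */
}
```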

I do not expect any of the above to matter much for spitting out noodles of plastic at 60mm/s (x 0.3 x 0.4mm or similar). But as rates get a bit more ballistic, it may matter more.
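For scale, that is 60 mm/s x 0.3 mm x 0.4 mm = 7.2 mm³/s of plastic, and at a typical belt axis of ~80 steps/mm, 60 mm/s is only about 4,800 steps per second per axis, comfortably within a plain microcontroller's reach.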

Frankly, I suspect we are near (or past?) an inflection point where Moore’s Law goes sideways. (As is quite usual in technology.) This particular inflection point is where software folk get to re-arrange the hardware whenever they come up with an improved algorithm, and where general-purpose programmable hardware has lower life-cycle costs than baking algorithms into ASICs. (A better algorithm first means we have to educate hardware designers, who then bake new ASICs, which get built into new boards, which get built into systems. Versus an update to existing software that updates the hardware.)

Even if I only break even in the current exercise, I win. :slight_smile:

@Preston_Bannister The other thing with FPGAs is that they have little or no usable on-chip RAM, and you need RAM for this.
An ASIC for an entire 3D printer architecture would require mass-production volumes of over 200,000 printers per month to justify the design cost.
32-bit ARM controllers fit well in that middle ground of cost and performance. Even considering that many firmwares are integrating an RTOS, an even faster microcontroller or FPGA makes little sense.

Relevant talk by a Xilinx guy: