Want to talk about dead ends.

I am a software guy. I have used open-source software quite a lot. I have contributed to some open-source projects (CVS, Apache Tomcat, Mortbay Jetty, and others).

While the present web is critically dependent on open source software, the simple fact is that most open source software projects fail. Some never get interest beyond their creator. Some get an initial pulse of interest, then fade.

A very, very few make it to large use.

As a software guy, when I choose to use open source software, I am placing a bet. If the project behind the software dies, that will prove a problem for my work. So I need to be able to estimate the long-term future of an open source component.

There are tell-tale signs…

If I am going to invest the time to build a printer or write software for a particular set of hardware, I would like that investment to pay off over a meaningful period. So I want to avoid things that are about to die, or that simply have no real future.

To be clear, I am not interested in the extreme low end of the market.

My time has value. I want a printer a notch better than the current crop. I want a faster, larger, more accurate printer at a modest price. That is my current design exercise / challenge.

A better printer is going to need a lot more compute.

When a ~$35 Raspberry Pi 3 offers a quad-core 1.2 GHz CPU and 1 GB of memory (very nice), I will be hugely skeptical of a platform that offers much less and costs much more.

The 8-bit Arduino is stretching out to 32 bits, but is still scarce on memory, compute, and market share. This looks a lot like the story of the 32-bit Z-80s.

There are niche boards … that do not seem likely to go anywhere.

Smoothieboards are pretty cool. Better than an Arduino, but still skimpy on CPU and memory. After an impressive start, the project seems stalled.

This is a common pattern in open-source software projects. The sort of pattern I want to avoid when adopting a component into my work.

There is the “Duet” board, which is also rather nice at the hardware level, though there seem to be issues with the software. CPU and memory are rather limited. Adoption is also limited. Taken together, this looks like a pattern to avoid.

Keep in mind, a ~$35 Pi 3 is running about a billion dollars’ worth of software. If your new board requires new software, what is the cost?

From a hardware perspective, these choices may look equivalent. From a software perspective, they are not at all equivalent. And software almost always wins, as was dramatically shown as far back as the introduction of the first IBM PC.

The Pi has more speed than necessary for a 3D printer, but less I/O capability. It doesn’t even have an ADC. At the same time, it’s too slow to handle a demanding slicer.

A 2D printer doesn’t have a crazy high-end CPU in it. Even a high-end laser printer typically has a 400 MHz CPU and 32 MB of RAM (which is mostly used for print job storage, not processing).

The 240 MHz 32-bit MCUs hit a nice balance of speed and I/O. Sure, they don’t have a gigabyte of memory, but if you need that much then you’re doing something wrong.

Smoothie is developing a v2 which uses a coprocessor that will handle incredibly fast and accurate stepper timing.

The lookahead on a 3D printer is small; it doesn’t need to see 400 moves into the future because it’s a physical device, and you have plenty of time on a 32-bit MCU to do any required calculations. What you don’t have time for is waiting on slow GPIO or on some interrupt that throws off the timing. It needs to be precise and synchronized.

What most controllers are missing right now is math. The 8-bit boards don’t have enough CPU cycles to do everything before the next stepper pulse. They also don’t have the acceleration and pressure-advance algorithms fine-tuned yet. We’re still missing many feedback loops.
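
To put rough numbers on that, here is a back-of-envelope sketch; the clock speeds are typical examples (ATmega2560-class vs LPC1769/SAM3X-class), not any particular board’s spec:

```cpp
// Cycle budget between stepper pulses: how many CPU cycles are available
// per step at a given step rate, 8-bit AVR vs 32-bit ARM.
#include <cstdio>

int main() {
    const double step_rates_hz[] = {10e3, 40e3, 100e3};  // modest to aggressive
    const double avr_hz = 16e6;   // 16 MHz 8-bit AVR
    const double arm_hz = 120e6;  // 120 MHz 32-bit ARM

    for (double rate : step_rates_hz) {
        std::printf("%6.0f steps/s: %5.0f cycles/step on the AVR, %6.0f on the ARM\n",
                    rate, avr_hz / rate, arm_hz / rate);
    }
    // At 100k steps/s the AVR has ~160 cycles per step: no room for trig or
    // floating point, which is why 8-bit firmware leans on integer Bresenham
    // and precomputed tables.
    return 0;
}
```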

While the 8-bit boards are insufficient, the 32-bit ones are quite capable. They don’t need more compute, they need better compute. That means software, not more hardware. Before you start building an entirely new solution from scratch, take a look at current solutions and see how they perform. What features are they missing, and what exactly keeps them from reaching those high speeds and that accuracy? Focus on that first, and you should get a better idea of the requirements, should you need more.

I don’t think you have your facts straight when it comes to these things.

When it comes to controlling a printer, I wouldn’t say that the LPC chip on the Smoothieboard is skimpy. It runs deltas at 300+ mm/s just fine.

Smoothie also has a V2 currently in development.

The Duet is the same story. It’s an incredible board, and I’ve never heard anyone complain that it’s down on computing power.

Heck, the new Printrboard does calculations out to snap, crackle, and pop. Some crazy stuff going on there.

When it comes to the motion, I think we’re at a point where the M3/M4 chipsets are going to be plenty for doing just that. Even delta kinematics, which are if I remember correctly the most complicated due to their motion system and the need to convert from Cartesian to tower coordinates, still seem to run quite well on current boards.
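
For a sense of scale, this is roughly the per-point math a linear delta runs; a generic textbook sketch with made-up tower geometry, not any firmware’s actual code:

```cpp
#include <cmath>
#include <cstdio>

const double ARM_L = 300.0;  // diagonal rod length, mm (illustrative)
// Tower XY positions on a 150 mm radius, 120 degrees apart (illustrative).
const double TX[3] = {0.0, -129.9, 129.9};
const double TY[3] = {150.0, -75.0, -75.0};

// Inverse kinematics: carriage height on each tower for a nozzle at (x,y,z).
void delta_ik(double x, double y, double z, double carriage_z[3]) {
    for (int i = 0; i < 3; ++i) {
        double dx = TX[i] - x, dy = TY[i] - y;
        carriage_z[i] = z + std::sqrt(ARM_L * ARM_L - dx * dx - dy * dy);
    }
}

int main() {
    double cz[3];
    delta_ik(10.0, -5.0, 0.0, cz);  // one interpolated point
    std::printf("carriages at %.2f / %.2f / %.2f mm\n", cz[0], cz[1], cz[2]);
    return 0;
}
```

Three square roots per interpolated point, repeated every few milliseconds, is exactly the kind of load that buries an 8-bit AVR and forces delta firmware to chop moves into short segments.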

How can an argument that a Pi is a viable board be at all valid? It only has the GPIOs. No headers, MOSFETs, drivers, or any of the other peripherals needed. That stuff isn’t cheap. I know for a fact that the IC on a Smoothieboard costs less than some boards’ drivers.

What is the excess power needed for?

I too am skeptical when it comes to open-source projects, but that doesn’t necessarily make one a risky purchase, especially with hardware. If the design files are out there and the company goes under, the hardware can theoretically still be developed further.

@Stephanie_A beat me to it by a minute haha!

The real big problem with open-source software is that developers have a tendency to start something new from scratch, in parallel, instead of supporting the things that already exist. If they would all pool their resources into one project instead of a thousand, we would see much more progress.

Second: there is actually no limitation in computing power. Try to build a printer that will reach the computing limits of a Smoothieboard while delivering reasonable quality.

Third: if you try to reach the limits with your actual build, you will produce spaghetti for a few minutes and then your printer will disassemble itself.

Fourth: it seems that you’re using the same approach that pure software guys usually like to use. Don’t care about the hardware; just try to solve everything with some kind of software, and then move the hardware problem to someone else. Even though it’s outdated today compared to, e.g., Smoothieware, you should at least have a look into the Marlin firmware. It’s quite impressive what they squeezed out of these tiny 8-bit controllers with smart programming. And you finally get at least an impression of how things are rolling, instead of guessing the whole time :wink:

Duet is an awesome ecosystem. The main firmware developer is very precise when it comes to making sensible features. He releases several new builds per week, so the WiFi connectivity issues should disappear pretty soon. In my opinion it is the best option, and it has some features that even Smoothie doesn’t have.

Whatever your objective, I still suggest you USE a Duet before trying to make your thing that you think will leapfrog what’s available. You’re not going to leapfrog the state of the art if you’ve never exposed yourself to it enough to understand how it operates. Then there’s the Printrboard G2 with its sixth-order motion control. Microcontrollers aren’t meant to run a conventional operating system either, so conventional operating system expectations don’t fit. I also don’t think the assumptions coming from the general-purpose computing paradigm necessarily make sense on a special-purpose device, but that’s a different conversation.

@Preston_Bannister You’re putting the cart before the horse. 3D printing is a hardware application, full stop. The fundamental objective is to drive motors to push bearings and heaters to melt plastic to reproduce a physical object with the desired real-world geometry. 95% of the printer’s speed and print quality is determined by hardware selection and design, not control algorithms.

Nobody really cares if the motors and heaters are controlled by a RPi or a Beowulf cluster or a stack of punchcards. The specific choice of electronics makes less than 10% difference to the total machine cost and less than 2% of the design/engineering effort. The needs of the hardware come first, and the software adapts, or nobody uses it.

Now, there ARE many valid discussions about what shortcomings Marlin and Smoothieware and RepRapFirmware have, but you don’t know enough about 3d printer control algorithms to comment on those intelligently yet. There ARE software issues and processor bottlenecks, but they’re really on the margins – talking about maybe a 10-30% difference in theoretical speed or print quality. Then you’ll hit hard physical limits of the real-world process being controlled. It’s not like we’re going to program our way to a 3x or 8x increase in print speed by switching to a faster processor. Some realism is necessary here to keep from spending a lot of effort barking up the wrong tree.

I should also note that most of the shortcomings in control of today’s 3d printing algorithms are forced by the software’s lack of knowledge of the hardware state. Resolving this requires either increased hardware instrumentation (such as position encoders and environment sensors) or use of hardware-specific calibration factors that are not easy to determine. For example, a closed-loop field-oriented control stepper motor scheme like the Mechaduino uses could be integrated with the main control code to dynamically adjust print speed to stay just below the hardware limits. But that’s MORE custom hardware, not less. There are relatively few areas where hardware-agnostic software alone can make significant advances.

Stall detection on the steppers would be great. Throw in a filament diameter sensor capable of 0.001 mm resolution or better, and position feedback (within 0.01 mm for X/Y). While you’re at it, get a sensor that measures melt-chamber pressure in real time.
And do it all under $30.

Where I DO agree with Preston is that the current batch of processors ARE holding back development of better algorithms. Remember, Smoothieboards and Duets are both running core motion planner algorithms that were written for GRBL in 2009-2011 or so to run on a late-1990s vintage Atmega328p. (They both have lots of tweaks, upgrades, and refactors, but the underlying speed control methodology is unchanged. Same for Repetier, Marlin, Aprinter, Redeem, etc…) This code is FULL of kludges and non-optimal shortcuts to run in realtime on a Mr Coffee processor. Blatant example: only tangential acceleration within each gcode segment and cornering speed are controlled, NOT centripetal acceleration.
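
To make that blatant example concrete, here is a toy comparison. The junction formula is the standard GRBL-published form (rewritten in terms of the turn angle), but all the parameter values are invented for illustration:

```cpp
#include <cmath>
#include <cstdio>

// GRBL-lineage cornering limit: treat the corner as a virtual arc with
// "junction deviation" delta and cap speed only AT the junction point.
// phi is the turn angle between consecutive segments, in radians.
double junction_speed(double a_max, double delta, double phi) {
    double c = std::cos(phi / 2.0);  // GRBL's sin(theta/2) with theta = pi - phi
    return std::sqrt(a_max * delta * c / (1.0 - c));
}

// What's missing: along an arc of radius r, v = sqrt(a_max * r) bounds
// centripetal acceleration over the WHOLE curve, not just at corners.
double centripetal_speed(double a_max, double radius) {
    return std::sqrt(a_max * radius);
}

int main() {
    double a_max = 3000.0;         // mm/s^2, a typical accel setting
    double seg = 0.1, phi = 0.05;  // 0.1 mm facets turning ~2.9 deg each
    double radius = seg / phi;     // local arc radius implied by the facets
    std::printf("junction limit:    %.0f mm/s (capped only by feed rate)\n",
                junction_speed(a_max, 0.05, phi));
    std::printf("centripetal limit: %.0f mm/s\n",
                centripetal_speed(a_max, radius));
    return 0;
}
```

With 0.1 mm facets turning ~3 degrees each, the cornering rule happily allows ~690 mm/s, while a true centripetal limit on the 2 mm arc those facets describe is ~77 mm/s. The planner literally cannot see the curve.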

The reason Smoothieboards and Duets can handle almost any reasonable printer hardware today is that they’re doing stripped-down motion control intended for late 1990s processors, except they’re using mid-2000s processors to do it. Of COURSE they’re going to be able to run fast. But that’s like saying a DSL modem can send faxes really fast. It’s 2017, we shouldn’t be using DSL modems, and we shouldn’t be sending faxes.

The better question, in my opinion, is why we’re doing motion planning on the 3d printer at all. This application is little more than file playback. Why does the printer have to decide how to play back the file in realtime? We should be using the big, beefy desktop computer to decide when to speed up and slow down, and then the 3d printer just has to play back a series of ramp/coast commands in realtime. Then you can do 1,000-deep motion planner queues and calculate centripetal acceleration and apply any kind of complex heat/pressure control scheme you want on the extruder.
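
A sketch of what that playback could look like; the struct layout and all the numbers are hypothetical, not an existing format:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// One "pre-accelerated" primitive: the desktop planner has already decided
// when to speed up and slow down; the printer only counts steps.
struct RampCmd {
    uint32_t steps;       // steps to emit in this phase
    double   rate_sps;    // starting step rate, steps/s
    double   accel_spss;  // step-rate change, steps/s^2 (0 => coast)
};

// Realtime side: trivial playback, no lookahead, no planning.
void play(const std::vector<RampCmd>& cmds) {
    for (const auto& c : cmds) {
        double rate = c.rate_sps;
        for (uint32_t i = 0; i < c.steps; ++i) {
            double dt = 1.0 / rate;  // wait dt, then pulse the step pin
            rate += c.accel_spss * dt;
        }
        std::printf("phase done: %u steps, exit rate %.0f steps/s\n",
                    (unsigned)c.steps, rate);
    }
}

int main() {
    // Host-side planner output for one move: ramp up, coast, ramp down.
    std::vector<RampCmd> move = {
        {630,  1000.0,  50000.0},   // accelerate ~1000 -> ~8000 steps/s
        {6000, 8000.0,      0.0},   // coast
        {630,  8000.0, -50000.0},   // decelerate back toward ~1000 steps/s
    };
    play(move);
    return 0;
}
```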

The whole toolchain we’re using today is a throwback to a time when people thought gcode files should be hardware-agnostic because everyone would want to share the same sliced print file between multiple printers. But it turns out, that’s an edge case <0.1% of users actually want to do. Slicers are free and fast so it makes more sense to re-slice for each printer. And the use of gcode as the command protocol significantly limits what the printer can do. It’s a major cause of printer speed limits (via USB transmit of plaintext commands one letter at a time) and printing errors (via excessive command rates or SD read errors).

I’ve been trying to convince firmware developers to ditch gcode for a couple years now – either via a spline path format (which facilitates centripetal acceleration calcs) or a pre-accelerated format (which shifts most of the performance-critical calculation to the desktop computer).

@Stephanie_A
If I remember correctly, you could already use stall detection. I think there is such a feature in the TMC drivers, measuring the back EMF. But the driver will not tell the controller that the motor is stalling; instead it will try to increase the current within its limits. So the intention is basically to run the drive train more efficiently.

Regarding the other two things:
We have a sensor in our lab that can measure the pressure directly. Price tag: 1500€.
We also have a measurement device that can measure up to 5 mm with that accuracy. Price tag: 50000€.

I have the impression that we are far away from affordable solutions for sub-1000€ printers :smiley:
However, I think one day we’ll see a clever person with a smart and cheap solution in hand.

Have you seen the tests with the force sensors on the delta effector? Quite interesting!

@Stephanie_A @Sven_Eric_Nielsen You can actually derive most of what you want using load angle detection on the steppers like the Mechaduino has. The more difference between the rotor encoder and commanded position, the more load the motor is experiencing. It’s trivial at that point to keep the load angle within whatever limit you want, such as <=1 full step to prevent stall, or even calculate an actual FORCE exerted by the motor. The problem is that you need to keep all four axes in sync, so the main controller has to do the slowdown, not the Mechaduino. That requires more back-and-forth communication than is currently possible in the software for the motion controllers or the Mechaduino. (This is the sort of thing I’d love for people like Preston to work on.)
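
A minimal single-axis sketch of the idea, assuming a Mechaduino-style rotor encoder (function name and thresholds invented):

```cpp
#include <cmath>

const double FULL_STEP_DEG = 1.8;  // 200-step/rev motor

// Feed-rate multiplier in (0,1] from one axis's load angle. The MAIN
// controller would take the min across all four axes and slow them in
// sync, which is exactly the coordination problem described above.
double throttle_for_load(double commanded_deg, double encoder_deg) {
    double load = std::fabs(commanded_deg - encoder_deg);
    if (load <= 0.5 * FULL_STEP_DEG) return 1.0;   // lightly loaded
    if (load >= FULL_STEP_DEG)       return 0.25;  // near stall: crawl
    // Linear back-off between half a step and a full step of lag.
    return 1.0 - 1.5 * (load / FULL_STEP_DEG - 0.5);
}
```

The per-axis math is the easy part; the hard part, as noted, is getting the main controller to apply the result across all axes in sync.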

You can do a LOT with that info, without needing any more sensors. Load angle of the extruder motor gives you the viscous drag through the hot end. You can do super useful stuff with that, including dynamic speed adjustment, dynamic hot end temp adjustment, or even history-matching to tell if the filament diameter has changed. Load angle of the XYZ motors gives you a portion of the position error due to acceleration forces, which you can use to dynamically adjust speed to trade off ringing versus print times.

The above all assumes you have a pretty accurate rotor position encoder like the Mechaduino. In comparison, StallGuard type techniques use the driver to detect instantaneous back-emf, which depends on rotor position and speed, independent of commanded position and speed. Basically the difference in phase between the back-emf and drive current waveform gives you a primitive means of measuring load angle. But the voltage measurement can only be done during zero-crossings in the coil current waveform, meaning only once per full step of motion. And back-emf magnitude depends on rotor speed so you only get a clear signal when the motor is spinning at a non-trivial speed for several full steps prior to the event. Which means StallGuard can’t be used for precision tuning applications; only detecting large issues in gross motions. That’s much, much less useful for dynamic tuning.

@Ryan_Carlyle
You mean the next logical step after this one?


I guess most people are not so happy about the relatively high price of the Mechaduinos. I don’t say that they are too expensive in general; I mean they are probably too expensive for people with a traditional <50€ RAMPS setup :wink:

But this is an open-source project, and maybe there is enough power left in a Duet to handle this directly? I don’t know. But even if it required an additional Teensy for all axes, which then sends the data to the Duet or Smoothieboard, it would be much cheaper than the Mechaduinos.

By the way, last year I developed an analog rotary encoder based on a linear Hall-effect sensor and a 16-bit ADC, so in theory an even higher resolution than the Mechaduino. But this was not directly on a motor, so no disturbance from the motor’s magnetic field. I don’t know if this would be a problem on steppers; I’ve never measured the magnetic field around a running stepper motor. Are they saturated?
Unfortunately it was for the company I work for, so I’m not allowed to publish the code. But it was quite easy, accurate, and damn cheap (<10€ including an Arduino Nano, with almost all of its computing power left over).

@Ryan_Carlyle I completely agree that the majority of the planning can be done in the slicer. In fact, it would be much better if we switched to a model where the slicer directly polls the printer for its capabilities. Again the inkjet printer topology comes to mind: send the file to the printer driver, which slices for that printer and sends the movement and heater commands to the controller. Set your print settings in the driver; the driver knows the speed, heaters, nozzle, and all the other capabilities of the printer.

Instead, right now we need to put every little detail of the printer into the slicer. The slicer should know this automatically!
If we remove all the garbage from the controller, then it can handle movement with ease. No more conversion of kinematics done by the controller, since the driver knows what the machine is. It simply says which motor to move and when.
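
Something as simple as this record would cover it; every field name here is hypothetical, no real protocol implied:

```cpp
#include <cstdint>

// Hypothetical capability record the printer would report on connect;
// every field here is something users currently retype into the slicer.
struct PrinterCaps {
    char    kinematics[16];    // "cartesian", "corexy", "delta", ...
    double  build_volume_mm[3];
    double  steps_per_mm[4];   // X, Y, Z, E
    double  max_feed_mms[4];
    double  max_accel_mmss[4];
    uint8_t heater_count;
    double  nozzle_diam_mm;
};
// Host on connect: send a GET_CAPS query, receive PrinterCaps, and the
// driver now knows the speed, heaters, nozzle, and kinematics without
// the user typing in a single machine setting.
```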

In the same way, having one MCU handling communication and another handling movement accomplishes the same thing (this is what Smoothie v2 will do). There was even an experimental Marlin that used two MCUs. All the second MCU did was take in an array of timings for the step/dir pulses. No more calculations, and everything then happens exactly when it needs to.
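
Roughly, the second MCU’s whole job becomes consuming records like this (the layout is a sketch, not what that experimental Marlin actually used):

```cpp
#include <cstdint>

// One pre-timed pulse event, filled in by the planning MCU and consumed
// by the motion MCU. Layout and names are hypothetical.
struct StepEvent {
    uint32_t ticks;      // timer ticks to wait before firing this event
    uint8_t  step_mask;  // one bit per axis: pulse these step pins
    uint8_t  dir_mask;   // one bit per axis: direction pin levels
};

// The motion MCU's ISR reduces to pure playback, roughly:
//   wait(ev.ticks); write_dir(ev.dir_mask); pulse_steps(ev.step_mask);
// All kinematics, acceleration, and lookahead live on the other MCU.
```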

The problem is that nobody wants to move away from gcode. Doing so requires an entirely new ecosystem: the slicer, host, and firmware all need to change at once.
I have proposed an intermediate step where current gcode output is parsed into a new format for consumption by the printer. It’s the easiest way. People get all upset when you threaten to take away gcode, even if only for a binary-format translation for faster data transfer.

@Stephanie_A there’s already an open-source binary print file format that offloads some calculation load to the slicer, and it is IN USE IN ABOUT 100,000 PRINTERS TODAY, and has widely-available post-processor options to convert any slicer’s gcode to the optimized binary format. This is totally mainstream, well-proven, mature stuff. I have three printers that use it myself. Can you figure out what it is? Hint: nobody around here will touch it with a ten foot pole.

First, I have to say I am delighted by the depth and quality of the responses above. Even when you are (very politely) calling me an idiot. :slight_smile:

@Stephanie_A You are certainly right in saying the Pi 3 likely has more compute than needed for the upcoming problems - or so I hope.

I spent the 1980s and much of the 90s doing the software equivalent of origami - trying to fold big problems to fit into small computers. That was not as much fun as it sounds.

It might be that in the end I will find the Pi 3 has more CPU and memory than the problem requires. (Though at ~$35, less is not more.) Better that than finding mid-project that the compute/memory is insufficient, and having to do origami.

Also … running the slicer on the printer does not make sense to me. Sure, if all you want to do is print a one-inch figurine, a dumb slicer might be fine. Might throw in a slicer on the printer just for demos. Otherwise, not my main bet.

@Sven_Eric_Nielsen Keep in mind the open-source projects on which I bet most heavily are still around and in mainstream use (for CVS, a third of a century later). That is not just dumb luck. :slight_smile:

Also, just ordered my controller boards, ahead of when needed. (Still have at least a couple of ~10 hour prints, and more if my current experiment needs more iterations - which is likely.)

@Griffin_Paquette Also keep in mind I am looking ahead a bit. Once I get the current experiment (building a perhaps slightly-better printer) working, the next experiment is in making the software driving the printer smarter. Since a cheap Pi 3 offers an abundance of compute and memory, I will start there.

@Ryan_Carlyle As to 3D printing as a hardware application, well “yes”, and really profoundly “NO!”. The progress in the needed hardware from the early RepRap days to the present is impressive. The field very much needs appropriate hardware.

Rewind a bit. Draft a design using CAD software. Output STL. Run the STL through a slicer (Cura in my case). For complicated prints you are (at present) going to use human-acquired expertise to make tricky prints possible, and of quality. The slicer outputs gcode, which is then run through an interpreter that knows something of the hardware.

The above software chain is substantial.

You are also right when you say I do not yet know enough about the present state of the art. That is why I am building a printer, as a learning exercise. I am also running a risk in doing all this very publicly. It will be hard to hide my mistakes. :slight_smile:

As a software guy, I spent a lot of time choosing the right abstractions, and the interfaces between those levels of abstraction. If I choose badly, that can cost a project a lot. Frankly, I am very good at that sort of judgement.

Looking at this problem, I have an itch in the back of mind telling me the layers are wrong. So I am building a printer…

@Sven_Eric_Nielsen Duet is maxed out by RepRapFirmware’s exact step pulse timing. At high speeds, it has to double/quad step like everybody else. As I understand it, the Duet Wifi/Ethernet could get a lot of processor head room back by optimizing the code for the floating point unit on the ARM M4F, but that would require more code forking between the M3 Duets and M4F Duets, which last time I checked was pretty low on David’s todo list. Maybe the ESP8266 on the Wifi could do some work? It’s a pretty beefy microprocessor itself.
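
For anyone who hasn’t seen it, double/quad stepping is roughly this; a generic sketch, not Marlin’s or RepRapFirmware’s actual logic:

```cpp
#include <cstdint>

// When the requested step rate exceeds what the ISR can service, emit
// several pulses per interrupt and slow the interrupt rate to match.
uint8_t steps_per_isr(uint32_t step_rate_hz, uint32_t max_isr_hz) {
    if (step_rate_hz <= max_isr_hz)     return 1;
    if (step_rate_hz <= 2 * max_isr_hz) return 2;  // "double stepping"
    return 4;                                      // "quad stepping"
}
// The cost: the 2 or 4 pulses inside one interrupt fire back-to-back, so
// step spacing is no longer uniform -- exactly the timing jitter that
// exact-step firmware tries to avoid.
```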

Smoothie v1 is ok on clock cycles as long as the gcode command rate doesn’t get too high, but it is terminally out of RAM for all practical purposes.

32bit Repetier on Due is in pretty good shape for RAM and clock cycles, but the need to maintain general code compatibility with 8bit Repetier makes it impractical to do major algorithm upgrades.

I have hit performance caps on all of those. It’s not hard to do. Auto-leveling on a delta with a GLCD during USB printing of curvy shapes at high speeds will easily peg them all out and start showing pathological behavior as the motion planner is unable to keep up. The most obvious issue is planner queue depth – if your gcode segments are 0.1mm in a faceted curve and your buffer is only 16 moves, you cannot do high-speed motion in any sensible way. The time to slow to a stop becomes longer than the lookahead distance. This gets bad in a hurry with Smoothie Deltas in particular because the delta segmentation eats up the motion planner buffer. Repetier makes the planner deeper to account for segmentation because it has RAM to spare. But deeper planner buffers create vastly more processor load because the whole queue has to be iterated through every time a segment is added. The whole thing is a bit of a mess.
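
To see why deeper buffers blow up the processor load, here is the simplified shape of that replan; a sketch of the GRBL-lineage reverse/forward pass, not any firmware’s literal code:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Seg { double length_mm, exit_v, max_v, accel; };

// Runs EVERY time a new segment is appended to the queue.
void recalculate(std::vector<Seg>& q) {
    // Reverse pass: each exit speed limited by what the NEXT segment
    // can brake away from (v0^2 <= v1^2 + 2*a*d).
    for (int i = (int)q.size() - 2; i >= 0; --i)
        q[i].exit_v = std::min(q[i].max_v,
            std::sqrt(q[i + 1].exit_v * q[i + 1].exit_v
                      + 2.0 * q[i + 1].accel * q[i + 1].length_mm));
    // Forward pass: each exit speed limited by what THIS segment can
    // accelerate up to from its entry speed.
    for (size_t i = 1; i < q.size(); ++i)
        q[i].exit_v = std::min(q[i].exit_v,
            std::sqrt(q[i - 1].exit_v * q[i - 1].exit_v
                      + 2.0 * q[i].accel * q[i].length_mm));
}
// With 0.1 mm facets at 100 mm/s this runs ~1000 times/s, touching the
// whole queue each time: a 1000-deep buffer is ~60x the work of 16-deep.
```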

@Preston_Bannister You can do what I said: get a good 32-bit MCU to handle movement, then use the Pi to handle calculations. You could use an off-the-shelf board like a Smoothie or Duet, then write your code for the Pi to handle all of the translations. Use a fast interface (SPI, or USB, whatever).

Now you’re no longer designing new hardware, and the Pi could be replaced with anything that has the desired interface. It frees up the MCU to do what it’s good at, which is GPIO, and the computer does what it’s good at: calculations. Best part: the calculations don’t need to be in real time, because you have tons of memory and fast disk speed.

@Ryan_Carlyle It isn’t s3g, is it?

@Stephanie_A x3g, basically the same thing as s3g, yeah. Every Sailfish printer consumes binary x3g generated by a post-processor after gcode creation. It pre-calculates the step counts for each move, which saves the 8-bit MCU quite a bit of floating-point division. That’s one part of how Sailfish can print about twice as fast as Marlin with the same hardware and print quality. (It’s also a hyper-optimized codebase, while Marlin is intended to be legible, not fast.)
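
The flavor of the precalculation is something like this; the field names are invented, NOT the real x3g record layout:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdlib>

// Host-side post-processing: mm and feed rate become integer step counts
// and a step interval, so the 8-bit MCU skips the per-move float division.
struct BinaryMove {
    int32_t  steps[4];     // X, Y, Z, E in signed integer steps
    uint32_t us_per_step;  // dominant-axis step interval, microseconds
};

BinaryMove encode(const double mm[4], const double steps_per_mm[4],
                  double feed_mms) {
    BinaryMove m{};
    int32_t dominant = 0;
    for (int i = 0; i < 4; ++i) {
        m.steps[i] = (int32_t)std::lround(mm[i] * steps_per_mm[i]);
        dominant = std::max(dominant, std::abs(m.steps[i]));
    }
    double len_mm = std::sqrt(mm[0]*mm[0] + mm[1]*mm[1] + mm[2]*mm[2]);
    double dur_s  = len_mm / feed_mms;
    m.us_per_step = dominant ? (uint32_t)(dur_s * 1e6 / dominant) : 0;
    return m;
}
```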