I am not sure of the best way to get the word out on this, but I have come up with what I think is a very useful idea that could be implemented. However, I am not very familiar with how the Smoothie firmware operates or the protocol it uses.
I am drawing on my experience building drones that use sonar systems for object detection and avoidance. Note that I did not write that code either, but it is all open source and I would love to reference it to write something for Smoothie.
My thought is to mount a cheap HC-SR04 sonar on the underside of the gantry so it can scan the build plate layer by layer as the printer prints. It would plot the X and Y points where it sees the closest objects (this could stay disabled until, say, 5-8 mm of build height is reached, to make sure it can detect variations). If an object appears outside the expected extrusion area by a certain tolerance, that part could be marked as failed. A separate process (on the ATmega or an external system such as a Raspberry Pi) could scan the gcode to find the "area" of extrusion for each part, and the gcode could then be altered as it is fed to the printer so that the failed area is no longer active and the head takes a path around it.
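As a rough illustration of the gcode pre-scan step, here is a minimal Python sketch (all names are hypothetical, and it assumes absolute-positioning G0/G1 moves where an E parameter marks extrusion). It only looks at move endpoints, which is a simplification, but it shows how an "area of extrusion" could be pulled out of a gcode file:

```python
import re

def extrusion_bbox(gcode_lines):
    """Return [min_x, min_y, max_x, max_y] over the endpoints of all
    extruding G1 moves (moves carrying an E parameter), or None if the
    file contains no extrusion. Endpoints only -- a simplification."""
    bbox = None
    x = y = 0.0
    for line in gcode_lines:
        body = line.split(";", 1)[0]          # strip gcode comments
        words = body.split()
        if not words or words[0] not in ("G0", "G1"):
            continue
        params = dict(re.findall(r"([XYE])([-\d.]+)", body))
        x = float(params.get("X", x))          # axes are modal: keep last value
        y = float(params.get("Y", y))
        if "E" in params:                      # extruding move -> part of a printed area
            if bbox is None:
                bbox = [x, y, x, y]
            else:
                bbox = [min(bbox[0], x), min(bbox[1], y),
                        max(bbox[2], x), max(bbox[3], y)]
    return bbox

job = [
    "G1 X10 Y10 F3000",       # travel move, no extrusion
    "G1 X20 Y15 E1.2",        # extruding move
    "G1 X12 Y30 E2.4",
]
print(extrusion_bbox(job))    # -> [12.0, 15.0, 20.0, 30.0]
```

A real version would compute one box per part (slicers often label objects in comments), not one box for the whole plate, but the parsing step is the same.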
I think a first step toward seeing how well this could work would be to make the print abort when the sonar detects an error on the build plate. That would prove the sonar can detect errors and that Smoothie can act on them.
Thoughts? Or am I just crazy? There are a lot of other details, but I can only type so much at work without getting in trouble. Please feel free to chime in with any input on this.
Imported from wikidot
That’s definitely not something we have the power or space to do on the Smoothieboard.
You could have the host computer do it by adding this feature to something like pronterface though.
Well, I suppose I wasn't specific enough, sorry. I was planning on using an external device (most likely a Raspberry Pi) to control the sonar and interpret the data it brings in. As far as the Pi and the Smoothieboard communicating goes, the Pi would modify the gcode before feeding it to the Smoothieboard.
I already use a Raspberry Pi with AstroPrint and may just get a small add-on board so the AstroPrint streaming service and the Pi can handle all of this other complex stuff. So let me rephrase: do you think it is possible for a Pi to interpret this data and make the corrections? Or is this not really related to Smoothie anymore, and more a general operation for any 3D printer control board?
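The Pi-side correction could be as simple as a filter that sits between the gcode file and the board's serial port. A minimal sketch (hypothetical names; a real version would stream the surviving lines over USB serial with something like pyserial, and would also have to reroute travel moves around the debris and fix up the E axis):

```python
import re

def disable_region(gcode_lines, xmin, ymin, xmax, ymax):
    """Drop extruding moves whose endpoint falls inside a disabled
    rectangle (a part that has come off the bed). Travel moves and
    everything else pass through untouched. This only shows the
    filtering step, not the rerouting a real version would need."""
    out = []
    x = y = 0.0
    for line in gcode_lines:
        body = line.split(";", 1)[0]
        words = body.split()
        if words and words[0] in ("G0", "G1"):
            params = dict(re.findall(r"([XYE])([-\d.]+)", body))
            x = float(params.get("X", x))
            y = float(params.get("Y", y))
            if "E" in params and xmin <= x <= xmax and ymin <= y <= ymax:
                continue                      # extruding into the dead zone: skip
        out.append(line)
    return out

# A failed part near (50, 50) gets its extrusion dropped:
job = ["G1 X5 Y5 E1.0", "G1 X50 Y50 E2.0", "G0 X60 Y60"]
print(disable_region(job, 40, 40, 70, 70))   # -> ['G1 X5 Y5 E1.0', 'G0 X60 Y60']
```

Nothing here is Smoothie-specific, which matches the question: the same filter would work in front of any board that accepts streamed gcode.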
I think a raspi can do what you want, and I think it would work with any unmodified controller board/firmware.
While the idea is cool, I'm really not sure it has any real-world necessity though.
There is a need on CNC mills and laser cutters to detect workpieces and position the origin before machining them; that's something that would be very useful to a lot of people. We are planning on doing something similar using cameras, and we might have some money to pay for this sort of development, so don't hesitate to contact me at email@example.com to talk about it.
Well, I ran into an issue and an area of concern for this with 3D printing, which I feel could also be implemented for CNC.
For example, I was printing a 15-hour job with 12 parts. One part came off the bed and I had to fail the whole print 12 hours in, wasting a lot of filament. So the ability to avoid a failed area/part would be very nice to have.
A camera would also be a great option, but I assume it may be a bit more costly than a cheap sonar module. I assume you would want to use something like a lidar?
Nah, no lidar; for our purposes a camera is only a few dollars, it's just to detect parts. For high-precision positioning we'd add a microscope lens on top of that, but it's still not much.
However, a sonar would help us actually detect height and where things are positioned in the work area, automatically.
Hm, I am not sure how accurate an HC-SR04 sonar can get in terms of mm, since I use them on a foot or meter basis with my drones.
A camera may be a better option for keeping track of the affected areas. I assume we would add a 3-5 mm clearance tolerance around an affected part to make sure the nozzle has a clear path and doesn't interfere with the print. I'm just not sure an HC-SR04 can get that accurate.
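For what it's worth, the datasheet figures commonly quoted for the HC-SR04 (roughly 3 mm resolution and a ~15 degree beam) suggest mm-level part detection is marginal. The underlying math is simple: sound travels about 0.343 mm per microsecond at room temperature, and the echo pulse covers the round trip, so 1 mm of distance is only ~6 us of pulse width. A quick sketch of the conversion:

```python
# Speed of sound at ~20 C; it drifts with air temperature, which is one
# source of error on a heated printer.
SPEED_OF_SOUND_MM_PER_US = 0.343

def echo_to_mm(echo_us):
    """Convert an HC-SR04 echo pulse width (microseconds) to a one-way
    distance in millimetres. The pulse times the round trip, so halve it."""
    return echo_us * SPEED_OF_SOUND_MM_PER_US / 2

# e.g. a 583 us echo corresponds to roughly 100 mm:
print(round(echo_to_mm(583)))   # -> 100
```

So to resolve 1 mm you need the echo timed to about 6 us, which is doable on a microcontroller but tight from Linux userspace on a Pi; the wide beam is probably the bigger problem for picking out individual parts.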
How bulky is the microscope lens? We would probably not be playing with a whole lot of height (maybe 20-25 mm of clearance from the bottom of the gantry to the heater block).
The microscope lens is about 20 mm in diameter and needs to be about 3 mm above what you want to observe.
Interesting, I will look into it. One last thing: I won't be able to mount the camera or sonar directly over the head, since the hotend will be in the way. I assume we can still run a scan after each layer is finished. Can a script be run before every layer? That way the gcode path could be offset so the camera passes over what the head just printed. I am sure it would be a quick scan, too.
Whatever software you write for this would take care of that, yes, by sending gcode to the Smoothieboard.
Okay, awesome. Thank you for all the help, seriously. I will have to read up further on everything I would need to do. Sadly, I have practically no code-writing experience (aside from gcode), so this will be a learning curve. If I get onto something I may send an email to discuss it further. I would like to make sure what I do is compatible with Smoothie first.