I was thinking: could we build an all-in-one machine that, with a simple press of a button, scans a piece and 3D prints it automatically? I've seen 3D scanners out there; which one is the easiest to automate? Kinect-based, photo-based, David Laser? Say we add a rotating platform and a webcam that takes pictures from as many angles as necessary, feed those to reconstruction software (123D Catch, for example), then to a mesh-repair tool, then to a slicer, then to the printer. What resources do we need to do that? I thought a Raspberry Pi would work, but I guess I was wrong (I'm a newb to 3D printing, but not to electronics). So what do you guys think?
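Just to make the idea concrete, here's roughly the glue script I have in mind. It's only a sketch: 123D Catch is a cloud/GUI service with no official command line, so the capture, reconstruct, and repair commands below are placeholders, and the Slic3r and printcore calls assume those tools are installed.

```python
#!/usr/bin/env python3
"""One-button scan-to-print pipeline (sketch only).

Assumptions: photos come from a separate capture rig, the photogrammetry
and mesh-repair steps are stand-ins for whatever tools get automated,
and Slic3r plus printrun's printcore handle the real printing side.
"""
import subprocess

def run(cmd):
    print("->", " ".join(cmd))
    subprocess.run(cmd, check=True)

def scan_to_print(photo_dir="photos", mesh="scan.stl",
                  fixed="scan_fixed.stl", gcode="scan.gcode"):
    # 1. Capture photos of the object on the turntable (hypothetical helper).
    run(["python3", "capture_turntable.py", "--out", photo_dir])
    # 2. Photogrammetry; placeholder command, since 123D Catch has no CLI.
    run(["reconstruct", photo_dir, "--out", mesh])
    # 3. Mesh repair; placeholder for a netfabb-style repair round-trip.
    run(["repair_mesh", mesh, "--out", fixed])
    # 4. Slice from the console with Slic3r (this interface is real).
    run(["slic3r", fixed, "--output", gcode])
    # 5. Stream the G-code to the printer via printrun's printcore script.
    run(["printcore.py", "/dev/ttyUSB0", gcode])

if __name__ == "__main__":
    scan_to_print()
```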

Mh, dunno if a Pi is capable enough, but you could use a small ITX mainboard, fit the scanner into a Makibox case, for example, and put that on top of a Makibox LT (yes, I work for them ^-^). I don't know if 123D Catch alone will be enough, but maybe you can put its output into the netfabb repair cloud and then slice that via Slic3r on the console.
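The slicing step at least is easy to script, since Slic3r really does run from the console. Something like this (the config file is an assumption; you'd export one from the Slic3r GUI first):

```python
import subprocess

# Slice a repaired STL from the console with a saved Slic3r config.
subprocess.run([
    "slic3r", "scan_fixed.stl",
    "--load", "makibox.ini",   # exported printer/filament/print settings
    "--output", "scan.gcode",
], check=True)
```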

Given the current capabilities of 3D scanners, I guess a simple one-click approach wouldn't work right now, since the scanned model will certainly need some cleanup first. Also seconding the concerns regarding the Pi's processing power. However, in principle it should be possible to create something like that.

The Kinect is limited in that it's meant for a larger throw distance. It also has a lot of artifacting, as the extrapolated data is pretty noisy. It's good for large-scale scans (people's faces and bodies) but not much else. But hey, think about it: that's exactly what it was designed for.

I see. So far, either David laser scanning or 123D Catch seems the best option for a small enclosed space inside the machine. I will soon start playing around with 123D Catch to see how it does the job with a solid-color background to minimize the noise. I guess lighting will also play a role in eliminating shadows to keep the noise low. A microcontroller can control the rotating platform and the webcam shots; later on, we can integrate that into the Mega that controls the printer. When I receive my MakiBox, I'll try to integrate the scanning into the printer by installing a belt-driven rotating platform under the build platform (raised to its maximum height), placing a webcam in a corner of the box, and covering the walls with a white matte background. Seems easy to do, but I'll know more when I see the box.
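The control loop for the turntable and camera could look something like this. It's a sketch written for a Pi with RPi.GPIO and fswebcam (a common Linux command-line webcam grabber), though a bare microcontroller would do the same job; the pin numbers, step counts, and resolution are all assumptions about the rig:

```python
import os
import subprocess
import time
import RPi.GPIO as GPIO  # assumes the control side runs on a Raspberry Pi

STEP_PIN, DIR_PIN = 17, 27   # hypothetical wiring to a stepper driver
STEPS_PER_REV = 200          # typical 1.8-degree stepper, no microstepping
SHOTS = 36                   # one photo every 10 degrees

GPIO.setmode(GPIO.BCM)
GPIO.setup([STEP_PIN, DIR_PIN], GPIO.OUT)
GPIO.output(DIR_PIN, GPIO.HIGH)

def step(n):
    # Pulse the step pin n times to advance the turntable.
    for _ in range(n):
        GPIO.output(STEP_PIN, GPIO.HIGH)
        time.sleep(0.002)
        GPIO.output(STEP_PIN, GPIO.LOW)
        time.sleep(0.002)

os.makedirs("photos", exist_ok=True)
for shot in range(SHOTS):
    subprocess.run(["fswebcam", "-r", "1280x720",
                    f"photos/shot_{shot:02d}.jpg"], check=True)
    step(STEPS_PER_REV // SHOTS)
    time.sleep(0.5)          # let the platform settle before the next shot

GPIO.cleanup()
```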

You can’t use stereophotogrammetry (like 123D Catch) with a rotating platform, and you can’t use it with a solid background.

123D Catch needs to be able to identify patterns in the background to track rotation by parallax motion. Newspapers are the best background, as they create a lot of unique, identifiable patterns. You also can’t use a turntable, because moving the object being scanned causes the light and shadows to change, making it impossible for the algorithm to properly identify features.
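A quick way to sanity-check whether a background has enough trackable texture is to count detectable features in a test shot. Here's a sketch using OpenCV's ORB detector; ORB is just a stand-in, since whatever 123D Catch uses internally isn't public:

```python
import cv2  # OpenCV

def feature_count(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=5000)
    keypoints = orb.detect(img, None)
    return len(keypoints)

# A newspaper background should yield thousands of keypoints;
# a plain white wall, close to none.
print(feature_count("newspaper_background.jpg"))
print(feature_count("solid_background.jpg"))
```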

If you want to use a turntable, you have to use laser or structured-light scanning. The Kinect uses structured light, but as @ThantiK mentioned, the throw distance is not suitable. For the type of scanning you have in mind, laser-line is probably best. Take a look at @Tony_Buser's SpinScan: http://www.thingiverse.com/thing:9972
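For reference, the math behind laser-line scanning is plain triangulation. A minimal sketch, assuming a pinhole camera, a known camera-to-laser baseline, and a known laser angle:

```python
import math

def laser_depth(u_px, focal_px, baseline_m, laser_angle_deg):
    """Depth of a laser-lit point by triangulation (sketch).

    u_px: horizontal pixel offset of the laser line from the image center
    focal_px: camera focal length in pixels
    baseline_m: camera-to-laser distance in meters
    laser_angle_deg: laser tilt toward the camera axis
    """
    # Camera ray: x = z * u/f.  Laser plane: x = b - z * tan(theta).
    # Intersecting the two gives the depth z.
    return baseline_m / (u_px / focal_px + math.tan(math.radians(laser_angle_deg)))

# e.g. a 10 cm baseline, a 30-degree laser, and the line seen 120 px
# off center with an ~800 px focal length:
print(laser_depth(120, 800, 0.10, 30))  # depth in meters
```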

I see. So in my setup I can only use the David laser-scanner technique, which is cheap but introduces lots of scattered points that need to be cleaned up. I'll have to read up on structured-light scanning to see how it works, whether it's more effective, and what the costs are. Any other ways to do this in such a small, confined space? A process that can be automated? Thanks.

Perhaps an Android tablet instead of the laptop or RasPi. The heavy processing could be done online or remotely, but the printer can be connected to the tablet, and the user should be able to control the print remotely and even watch how it's going through the tablet's camera.