If you've seen Star Wars Episode V (The Empire Strikes Back) you'll know that things don't end that well for Han Solo. He finishes up entombed in "Carbonite". Now, thanks to the magic of the Kinect sensor and 3D printing, we can all get the same treatment.
I've written a little program that takes the output from the Kinect Depth camera and renders it into an STL model that can be 3D printed. It uses some fairly simple averaging and filtering and seems to do a perfect job of rendering all of my chins in lifelike detail. Oh well.
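The actual program is linked below; as a rough illustration of the averaging idea (a Python sketch, not the real code — the frame format and the Kinect convention that a zero depth value means "no reading" are assumptions here), averaging a short burst of depth frames while ignoring invalid pixels smooths out most of the sensor noise:

```python
import numpy as np

def average_depth(frames, invalid=0):
    """Average a stack of depth frames, ignoring invalid (zero) pixels.

    Each frame is a 2D array of depth readings in millimetres. Pixels
    where the sensor returned no reading are skipped, so a single noisy
    frame doesn't drag the average towards zero.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    valid = stack != invalid
    counts = valid.sum(axis=0)
    totals = np.where(valid, stack, 0).sum(axis=0)
    # Where a pixel was never valid, report it as invalid (0)
    return np.where(counts > 0, totals / np.maximum(counts, 1), 0)
```

Even a handful of frames averaged this way makes a visible difference to the smoothness of the final print.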
Above you can see the program in action, pointed at my less-than-tidy office. You can set thresholds for the front and back of the 3D region to be rendered, and also control the width of the printed item and how strong the relief is. You can even take selfies.
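The front/back thresholds and relief strength amount to clamping each depth reading into a window and mapping it to a height. A minimal sketch of that mapping, plus a bare-bones ASCII STL writer for the resulting heightmap (the function names, default distances, and the triangulation are all my own illustration, not the program's code — and a genuinely printable model also needs walls and a base, which I've left out):

```python
import numpy as np

def depth_to_relief(depth_mm, near=800, far=1500, relief=10.0):
    """Clamp depth to the [near, far] window (in mm) and map it to a
    relief height: things at the near threshold stand proud by
    `relief` mm, things at or beyond the far threshold are flat."""
    d = np.clip(depth_mm.astype(np.float64), near, far)
    return (far - d) / (far - near) * relief

def heightmap_to_stl(z, path, pixel_mm=1.0):
    """Write the top surface of a heightmap as an ASCII STL by
    splitting each pixel quad into two triangles."""
    rows, cols = z.shape
    with open(path, "w") as f:
        f.write("solid relief\n")
        for y in range(rows - 1):
            for x in range(cols - 1):
                a = (x * pixel_mm, y * pixel_mm, z[y, x])
                b = ((x + 1) * pixel_mm, y * pixel_mm, z[y, x + 1])
                c = (x * pixel_mm, (y + 1) * pixel_mm, z[y + 1, x])
                d = ((x + 1) * pixel_mm, (y + 1) * pixel_mm, z[y + 1, x + 1])
                for tri in ((a, b, c), (b, d, c)):
                    f.write("  facet normal 0 0 1\n    outer loop\n")
                    for vx, vy, vz in tri:
                        f.write(f"      vertex {vx} {vy} {vz}\n")
                    f.write("    endloop\n  endfacet\n")
        f.write("endsolid relief\n")
```

Scaling `pixel_mm` controls the width of the printed item, and `relief` controls how strong the relief is, which is essentially what the sliders in the program do.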
I've popped the whole thing on GitHub, you can find it here.
We are going to set the system up at the Freshers Party next week so that we can print out little frozen models of all our new students...
Update: Rob (great name that) Relyea has pointed out that the Kinect Fusion tool does a great job of capturing 3D objects and will export STL files. It works very well, and the way that you can move the sensor round and add detail is very impressive. You can find it via the sample browser once you have installed the Kinect for Windows 2 SDK.
However, I wanted to find out just how far you could get with a single sensor and a fixed viewpoint. I also wanted to produce solid, print-ready output very quickly on a large scale, which is what my program does. I'm very impressed that, with just a bit of averaging, you can get such good results from the depth sensor alone.