
The fact that this only works in the dark made me wonder how much of what the Kinect does could be possible on an iPhone, and what would be needed to get there. Two cameras? What else?


If the phone had two cameras spaced some distance apart, then you could use stereo vision algorithms to get a similar result. Maybe it would be possible to put an infrared laser on a phone, similar to the Kinect, but I expect that power consumption would become an issue (although if it's only in a momentary blast it might be feasible).
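Roughly, with a rectified stereo pair you'd compute a disparity map and convert it to depth. A minimal sketch using OpenCV's block matcher, where the file names, focal length, and baseline are made-up placeholders rather than real iPhone numbers:

    import cv2
    import numpy as np

    # Depth from a rectified stereo pair via block matching. "left.png"/"right.png",
    # focal_px and baseline_m are placeholder values, not taken from any real phone.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point values

    focal_px = 800.0    # assumed focal length in pixels
    baseline_m = 0.06   # assumed spacing between the two cameras, in metres

    valid = disparity > 0
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = focal_px * baseline_m / disparity[valid]  # Z = f * B / d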

There are other possible methods, such as the "photo popup" created at CMU some years ago, but these rely heavily upon "dodgy heuristics" and often fail to give a good result.


If your subject doesn't move, two cameras can be approximated by taking two pictures a set distance apart.[1]

Also note that the method used by this app is very short-ranged, and it uses the LCD's backlight for illumination while it's imaging, so there's zero visual feedback, making any kind of interactive application impossible.

1: There are third-party accessories to make this easy and repeatable, built with varying levels of quality. Here's a fairly cheap one, circa 2003: http://www.dansdata.com/photo3d.htm


Or what about using the built-in motion sensors to record the relative camera location and orientation for a series of frames captured with, say, an iPhone camera? I don't know exactly how the accelerometers and gyros work, or what sort of data they provide (linear distance, or just orientation changes?), but imagine holding down a "scan" button as you simply swing the phone around a subject to capture a series of images. I would think it would be possible to reconstruct 3D surfaces (at least under suitable illumination conditions, I guess) given a known camera location/orientation for each frame. Pushbroom stereo, in remote sensing parlance...
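If the pose of each frame really were known, depth recovery reduces to triangulating matched features across frames. A minimal sketch using OpenCV's triangulatePoints, where the intrinsics, poses, and pixel coordinates are all invented illustration values:

    import cv2
    import numpy as np

    # Triangulate one matched point from two frames with (assumed) known camera poses.
    # K, the poses and the pixel coordinates are invented for illustration only.
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])        # first frame at the origin
    R2 = cv2.Rodrigues(np.array([[0.0], [0.1], [0.0]]))[0]   # small rotation about Y
    t2 = np.array([[-0.05], [0.0], [0.0]])                   # 5 cm sideways translation
    P2 = K @ np.hstack([R2, t2])                              # P = K [R | t]

    pt1 = np.array([[330.0], [245.0]])   # where the feature appears in frame 1
    pt2 = np.array([[305.0], [246.0]])   # where the same feature appears in frame 2

    X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)   # homogeneous 4x1 result
    X = (X_h[:3] / X_h[3]).ravel()                  # 3D point in the first camera's frame
    print(X)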


Tracking movement in 3D using dead reckoning is apparently very inaccurate; with the iPhone's sensors I wouldn't expect it to stay accurate for more than a few seconds at best. I visited a startup working on the problem a few years ago, and they had problems even with dedicated hardware.
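A toy illustration of why double-integrating accelerometer readings drifts so fast: even a small residual bias grows quadratically with time (the sample rate and bias below are made-up, ballpark figures):

    import numpy as np

    # Why dead reckoning drifts: double-integrating accelerometer samples turns a
    # small constant bias into position error that grows with t^2.
    rate_hz = 100.0
    bias = 0.02                          # residual accelerometer bias, m/s^2
    t = np.arange(0, 10, 1.0 / rate_hz)  # ten seconds of samples

    accel = np.full_like(t, bias)        # the phone is actually stationary; we only measure bias
    velocity = np.cumsum(accel) / rate_hz
    position = np.cumsum(velocity) / rate_hz

    print(f"apparent drift after 1 s:  {position[int(1 * rate_hz)]:.3f} m")
    print(f"apparent drift after 10 s: {position[-1]:.3f} m")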


You could interleave the illumination of the subject with each photo taken:

1. Illuminate 2. Take photo 3. Show photo

Repeat.

If step 2 is really short, the illumination phase can be much shorter than the show-photo phase, and the user of the phone would just see it as 4 short flashes over their own photo.
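A rough sketch of that loop, where the illuminate/capture/show functions are stand-ins for whatever the phone's screen and camera APIs actually provide and the timings are guesses:

    import time

    # Interleaved illuminate/capture/show loop. The three helper functions below
    # are placeholders, not a real camera or display API.
    LIGHT_DIRECTIONS = ["top", "bottom", "left", "right"]   # e.g. the four screen edges
    FLASH_S = 0.05     # brief illuminate-and-capture window
    PREVIEW_S = 0.45   # much longer "show photo" phase between flashes

    def illuminate_from(direction):      # placeholder: light one edge of the screen
        print(f"screen lit from the {direction}")

    def capture_frame():                 # placeholder: grab one camera frame
        return object()

    def show(frame):                     # placeholder: put the captured frame back on screen
        pass

    def scan_once():
        frames = []
        for direction in LIGHT_DIRECTIONS:
            illuminate_from(direction)       # 1. illuminate
            frames.append(capture_frame())   # 2. take photo
            time.sleep(FLASH_S)
            show(frames[-1])                 # 3. show photo
            time.sleep(PREVIEW_S)
        return frames                        # four differently lit frames, ~2 s total

    scan_once()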



