One of the biggest advantages of the HTC Vive compared to the Oculus Rift (at least for now) is that the Vive offers a room-scale experience, allowing players to actually walk around in a 3D environment.
So the experiment here is to find the best way to recreate a real 3D environment and build on top of it to create an AR-like experience.
From my research, three methods are quite popular: depth-sensor based solutions (Kinect/Tango), photogrammetry, and Lidar + Mari texture mapping.
In my experiments, Kinect/Tango captured geometry faster and more reliably, but was bad at creating a photo-realistic environment. Tango is especially fun for capturing live moments, like two people having a conversation.
I tested scanning with Tango's Constructor app, and also built a Unity project based on this documentation.
Photogrammetry is a technique that reconstructs 3D models from multiple photos taken from different angles.
Popular software includes ReMake, PhotoScan, RealityCapture, and 123D Catch.
It turns out this technique can produce really realistic results, but the meshes are usually far too high-poly to use in VR, and the geometry is rarely perfect.
For example, you can clearly see holes in the 3D model below. The technique also struggles with plain white walls, which don't contain enough visual features for the photos to be matched.
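Those holes can be checked for programmatically before dropping a scan into a VR scene. Here is a minimal sketch of my own (not tied to any particular photogrammetry tool): in a triangle mesh, a "hole" shows up wherever an edge belongs to only one triangle, so counting such boundary edges is a quick sanity check.

```python
# Assumption: the mesh is given as a list of triangles, each a tuple of
# three vertex indices. Any edge used by exactly one triangle lies on a
# hole or open border of the mesh.
from collections import Counter

def boundary_edges(triangles):
    """Return the edges used by exactly one triangle (hole/border edges)."""
    counts = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted((u, v)))] += 1
    return [edge for edge, n in counts.items() if n == 1]

# Two triangles forming a flat square: the shared diagonal is interior,
# while the four outer edges are reported as boundary edges.
quad = [(0, 1, 2), (0, 2, 3)]
print(len(boundary_edges(quad)))  # 4
```

A watertight (hole-free) scan would report zero boundary edges; anything above that marks geometry you may want to patch or re-shoot.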
My later project is based on this technique (detailed documentation to be added). People can stand in the actual room while wearing the headset. The virtual room is roughly 90% mapped to the real room, so when you touch a table in the virtual world, you are also touching the real table.
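The mapping idea can be sketched in a few lines. This is purely my own illustration of the calibration step (the names, numbers, and 2D simplification are assumptions, not the project's actual code): given two reference points measured on the real floor and the same two points in the scanned model, you can solve for the rotation and translation that aligns the scan to the room.

```python
# Hypothetical 2D calibration sketch: align a scanned room to the real
# room using two matching reference points (e.g. two table corners).
import math

def align_2d(src_a, src_b, dst_a, dst_b):
    """Rigid transform (angle, tx, ty) mapping src points onto dst points."""
    # Rotation: difference between the two segments' headings.
    ang = (math.atan2(dst_b[1] - dst_a[1], dst_b[0] - dst_a[0])
           - math.atan2(src_b[1] - src_a[1], src_b[0] - src_a[0]))
    c, s = math.cos(ang), math.sin(ang)
    # Translation: whatever is left after rotating the first source point.
    tx = dst_a[0] - (c * src_a[0] - s * src_a[1])
    ty = dst_a[1] - (s * src_a[0] + c * src_a[1])
    return ang, tx, ty

def apply(transform, p):
    """Apply the (angle, tx, ty) transform to a 2D point."""
    ang, tx, ty = transform
    c, s = math.cos(ang), math.sin(ang)
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

# Scanned corners at (0,0)/(1,0) vs. measured real-room positions.
t = align_2d((0, 0), (1, 0), (2, 1), (2, 2))
x, y = apply(t, (1, 0))  # lands (up to float error) on (2, 2)
```

In practice an engine like Unity does this with a parent transform on the scanned mesh, but the underlying math is the same, and since the scan itself is imperfect, the fit stays "90%" rather than exact.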
To make it feel more "AR", I also added dragons and fire to the scene, so the whole scene looks more apocalyptic. Players can use a bow and arrow to shoot the dragons and defend themselves.
Lidar is too expensive for me to try out myself, so I only did some quick research on how to texture Lidar scans with Mari.