LiveScan3D: open source, real time 3D reconstruction using multiple Kinect v2

  29,901 views

Marek Kowalski

8 years ago

LiveScan3D is a system for real-time 3D reconstruction using multiple Kinect v2 depth sensors simultaneously. The produced 3D reconstruction is a coloured point cloud, with points from all of the Kinects placed in the same coordinate system.
For more info check out my website: home.elka.pw.edu.pl/~mkowals6/
Or check out the project's GitHub website: github.com/MarekKowalski/Live...

Comments: 25
@NickMariaDoStuff 7 years ago
This is so close to being exactly what I was looking for. I had it in my head to set up several Kinect v2 around a room and use them to record an event (say a wedding) which could then be watched back in VR so you could walk around the event after and see it from different angles. A person could then have either a "video" file to run on a general program or an event app which just had their event as an executable file.
@MarekKowalski1 7 years ago
Hi! We actually have a Unity project that can be used to display the point cloud from LiveScan3D in VR. You can find it here: github.com/MarekKowalski/LiveScan3D-Hololens You can also playback previously recorded sequences using the LiveScanPlayer tool available in the original repo, here: github.com/MarekKowalski/LiveScan3D
@adelarscheidt 7 years ago
Man, that is reeeeally cool
@janetech6058 3 years ago
I used the ICP algorithm to merge point clouds, but I ran into a problem: while extracting frames the RAM fills up and the application closes immediately. Your solution captures frames for the model-building process. Thanks for reading the comments, and it would be nice to hear how you handle this.
@sheridanis 8 years ago
Hi Marek, how are you doing the alignment/calibration of the point clouds? Is it a manual process, or are you using something like CloudCompare? James
@MarekKowalski1 8 years ago
+James Sheridan Hi James, The point clouds are aligned automatically using visual markers. You print the marker on a piece of paper and attach it to something rigid. The Kinects see the marker and infer their relative poses from it. We also plan to include calibration based on the skeletons detected by the Kinect for Windows v2 SDK. Once that feature is included, no markers will be necessary for calibration. Marek
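For readers curious what "inferring relative poses from a marker" boils down to: once the same marker corners are located in 3D by two sensors, the rigid transform between the sensors can be recovered with the standard Kabsch/SVD method. This is only an illustrative sketch (the function name and the use of NumPy are my own, not the project's actual C#/C++ implementation):

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t so that R @ p + t maps
    each point in src onto the matching point in dst.

    src, dst: (N, 3) arrays of matched 3D points, e.g. the same marker
    corners expressed in two sensors' coordinate frames (Kabsch method).
    """
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```

With each Kinect's pose relative to the marker known, every sensor's cloud can be transformed into the marker's coordinate system, which is why all points end up aligned.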
@sheridanis 8 years ago
+Marek Kowalski So you're just using the checkerboard detection in OpenCV? Is there any documentation/instructions for this, or should it be obvious once I run things or look through the code?
@MarekKowalski1 8 years ago
+James Sheridan Hi, we use our own custom marker system. We do not use the checkerboard because we want to be able to distinguish between different markers present in the scene. There is a manual included with the software, and getting everything started should be straightforward. Some settings are still not covered in the manual, but we will supplement it soon. If you want a more technical description, there is an article from 3DV 2015 that should be published soon, titled "LiveScan3D: A Fast and Inexpensive 3D Data Acquisition System for Multiple Kinect v2 Sensors".
@sheridanis 8 years ago
+Marek Kowalski - Thanks a heap Marek and top work
@MarekKowalski1 8 years ago
+James Sheridan Thanks a lot James.
@GhassemTofighi 8 years ago
1) What are the positions of the 3 cameras in the room? 2) At minimum, how many cameras do I need to reconstruct the whole room in 360 degrees? 3) How many cameras can be processed in parallel in real time? With 3 cameras the point cloud seems to run at 25 fps; can that be considered real time (30 fps is real time for me)? Thanks
@MarekKowalski1 8 years ago
+Ghassem Tofighi Hi Ghassem, thanks for the comment. As for your questions: 1) The positions of the cameras are different in each of the sequences in the video. In the first sequence there are three Kinects located very close to each other but facing different directions (outward-facing config). In the other two sequences the Kinects are located around the object at roughly 90-degree angles (inward-facing config). 2) If you use the outward-facing config you will need 6, since each Kinect has a horizontal FOV of 70 degrees. Using 4 Kinects and placing one in each corner of the room seems more reasonable. 3) In our system each Kinect is connected to a separate computer, so in terms of processing power, adding more Kinects is fine as long as you have more computers. The thing I would worry about when adding more Kinects is interference: the more of them are looking at the same scene, the more likely you are to get interference. This does not happen as much in the outward-facing config, since the overlap between the devices is small.
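The "you will need 6" figure in point 2 is just coverage arithmetic: 360° divided by a 70° horizontal FOV is about 5.14, rounded up. A tiny sketch of that calculation (the function name is mine, for illustration only):

```python
import math

def sensors_for_full_circle(horizontal_fov_deg):
    """Minimum number of outward-facing sensors whose horizontal fields
    of view tile a full 360 degrees (no overlap margin assumed)."""
    return math.ceil(360 / horizontal_fov_deg)

print(sensors_for_full_circle(70))  # Kinect v2 horizontal depth FOV -> 6
```

Note this assumes the sensors share one viewpoint; the 4-Kinects-in-the-corners layout trades ideal angular tiling for better room coverage and less overlap.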
@teamdas3dstudio368 4 years ago
@@MarekKowalski1 Cool project! Maybe you know this approach to interference? www.precisionmicrodrives.com/content/using-vibration-motors-with-microsoft-kinect/
@sakshamchaturvedi2549 4 years ago
Please teach us how to build that
@brianhandy 8 years ago
Great-looking results! I got a couple of Kinect v2s working in one process with libfreenect2, and it didn't take too long to set up. I'd be curious to hear your results working with that library too, since then everything runs on one machine. It doesn't seem to have as much interference as you referred to, but I'm only just at the start of this stuff. donotunplug.tumblr.com/post/143077899182/kinect-videogrammetry-step-one
@MarekKowalski1 8 years ago
+Brian Handy Hi Brian, thanks a lot! Getting the app to work with libfreenect2 would indeed be very interesting. It would simplify the setup (fewer computers required) and probably increase the framerate as well. Moreover, modifying my app for libfreenect2 should be very easy; it would only require implementing a certain interface class for libfreenect2 (it is already implemented for the Windows driver). In the last several months I have been very busy with other projects, so I haven't had time to implement new features for this app, and that probably won't change in the near future. If you are interested in trying to get libfreenect2 to work with LiveScan3D, I would be more than happy to help you!
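The "interface class" idea mentioned here is a standard abstraction: hide the camera driver behind one small interface so a libfreenect2 backend can be dropped in next to the official SDK backend. LiveScan3D itself is written in C#/C++; the sketch below, with hypothetical names, just illustrates the pattern in Python:

```python
from abc import ABC, abstractmethod

class DepthCamera(ABC):
    """Hypothetical capture interface: implement it once per driver
    (official Kinect SDK, libfreenect2, ...) and the rest of the
    pipeline never needs to know which backend is in use."""

    @abstractmethod
    def open(self):
        """Start the device."""

    @abstractmethod
    def get_frame(self):
        """Return (points, colors): matched lists of (x, y, z) positions
        and (r, g, b) values for one captured frame."""

    @abstractmethod
    def close(self):
        """Stop the device."""

class DummyCamera(DepthCamera):
    """Stand-in backend used here only to demonstrate the pattern."""
    def open(self):
        self.opened = True
    def get_frame(self):
        # One white point, one metre in front of the sensor
        return [(0.0, 0.0, 1.0)], [(255, 255, 255)]
    def close(self):
        self.opened = False
```

Swapping drivers then means writing one new subclass rather than touching the capture, calibration, or networking code.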
@brianhandy 8 years ago
+Marek Kowalski I think I actually already got my own code in libfreenect2 working. :) But your code is much more fleshed out, so I may switch over! donotunplug.tumblr.com/post/143541265027/kinect-videogrammetry-in-motion Even so, I have two questions for you! I'm curious either way: 1) do you store the data to disk (and how)? I found no videogrammetry file formats online. 2) How do you do your alignments? A quick glance at the repo makes it look like iterative closest point, but is that a custom implementation or from a library like PCL? Thanks! And I may take you up on that offer yet!
@MarekKowalski1 8 years ago
+Brian Handy Hi Brian, I'm glad you like the app, hope you find it useful :) As for your questions: 1) Each recorded point cloud (each frame) is saved to a separate .ply file. As you probably know, .ply is a format used mainly for meshes and point clouds, not really a 3D video format. To be honest, I don't know of any 3D video formats either. 2) The camera poses used for alignment are obtained using visual markers. A marker is a pattern printed on a piece of paper and attached to a rigid surface. ICP is an optional step that can sometimes be used to refine the camera poses obtained from the markers. Having the option to use libfreenect2 as well as the official SDK in my app would be a great feature. However, if you only want to use a part of my app for your own purposes (for example the marker detection and pose estimation), I believe the license allows that as well. :) Cheers! Marek
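For anyone wondering what "one .ply file per frame" looks like in practice: the ASCII .ply format is just a small header followed by one line per point. A minimal sketch of such a writer (my own illustrative function, not the code LiveScan3D actually uses):

```python
def save_frame_as_ply(path, points, colors):
    """Write one captured frame as an ASCII .ply coloured point cloud.

    points: list of (x, y, z) floats; colors: list of (r, g, b) ints 0-255,
    matched index-for-index with points.
    """
    with open(path, "w") as f:
        # Header: declares vertex count and per-vertex properties
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        # Body: one point per line
        for (x, y, z), (r, g, b) in zip(points, colors):
            f.write(f"{x} {y} {z} {r} {g} {b}\n")
```

Recording a sequence is then just calling this once per frame with an incrementing filename; any mesh/point-cloud viewer (MeshLab, CloudCompare) can open the individual frames.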