EgoLocate, SIGGRAPH 2023.
5:22
Comments
@steevya
@steevya a month ago
wow... simply awesome..
@robinkneepkens4970
@robinkneepkens4970 a month ago
Congratulations on the outstanding results!
@CoyTheobalt
@CoyTheobalt 2 months ago
Hey cousin, I'm guessing.
@prathameshdinkar2966
@prathameshdinkar2966 2 months ago
Nice paper! Keep the good work going! 😁
@fasiulhaq2366
@fasiulhaq2366 2 months ago
I'm shocked
@BAYqg
@BAYqg 3 months ago
Such clean finger geometry. Interestingly, why is the hand geometry so distorted when even the fingers are captured well? Anyway, the result is amazing!
@XRCADIA
@XRCADIA 3 months ago
Impressive
@Elektrashock1
@Elektrashock1 3 months ago
Well done and novel approach. 🤙
@SuperSmyer
@SuperSmyer 3 months ago
Awesome!
@alexijohansen
@alexijohansen 3 months ago
I would love to join your lab!
@richard_goforth
@richard_goforth 4 months ago
Unbelievable. Very impressive. Appreciate you sharing!
@xu_xl
@xu_xl 5 months ago
It would be very helpful if the authors could share the code for this project.
@endavidg
@endavidg 6 months ago
4:30 Shouldn't (a) simply be called RASTERIZATION? I think calling it "Forward Rendering" is confusing because "Deferred Rendering" is also rasterization.
@leef918
@leef918 7 months ago
This is thorough research, covering popular depth devices such as Leap Motion and RealSense, with comparisons to other work.
@mahdihajialilue3825
@mahdihajialilue3825 9 months ago
nice work
@crestz1
@crestz1 9 months ago
What's the difference between this and Neuralangelo? Both appear to use first- and second-order derivatives.
@chillsoft
@chillsoft 9 months ago
This is really cool! One question, you say "capture dataset with realistic face deformations acquired with a markerless multi-view camera system", does that mean we will have to have an array of cameras once the code drops to reproduce this? How many and what quality cameras do we need, does an array of iPhones suffice? Great research, thanks for sharing!
@Bellberuu
@Bellberuu 10 months ago
Wowww that's so cool!
@well5423
@well5423 10 months ago
Amazing reconstruction quality! Bravo.
@birukfikadu-ni8ph
@birukfikadu-ni8ph 11 months ago
Please tell me where I can try this.
@topgunmaverick9281
@topgunmaverick9281 a year ago
🤟 Great
@goteer10
@goteer10 a year ago
It's incredible how it can work with its own skeletal tracking input and still get such amazing output! I'd imagine with more accurate skeletal tracking data gathered separately (with either IMUs or markers), it'd almost completely weed out the few edge cases where the renderer gets fed incorrect skeletal data (like arms teleporting or skewed hands). I'd love to see if it could handle hands in the future.
@pinas.passport
@pinas.passport a year ago
The end of photography 😅
@jimj2683
@jimj2683 a year ago
Has anyone tried using synthetic data from a game engine to train a neural network? With enough paired 2D/3D data, it should be possible to reconstruct most objects/scenes in Unreal Engine or similar.
@beautyfitnesschannel6639
@beautyfitnesschannel6639 a year ago
Great! Where can I try it?
@ChangPhlat
@ChangPhlat a year ago
wow
@TinNguyen-wx4fq
@TinNguyen-wx4fq a year ago
Good Job!
@dietrichdietrich7763
@dietrichdietrich7763 a year ago
amazing (powerful stuff)
@dietrichdietrich7763
@dietrichdietrich7763 a year ago
Awesome
@yangchen8602
@yangchen8602 a year ago
Great Talk! Thanks for sharing!
@petixuxu
@petixuxu a year ago
Could this be done with an STL of a figure?
@absoriz2691
@absoriz2691 a year ago
Great work!
@mingwuzheng4146
@mingwuzheng4146 a year ago
Excellent idea! I'm constantly eager to explore a neural UV mapping technique like this.
@ZergRadio
@ZergRadio a year ago
Interesting!
@bolzanoitaly8360
@bolzanoitaly8360 2 years ago
What do you want to show us? If you can't share the model, then what's the point? Even I could take this video and put it on my vlog. This is just nothing... Can you share the model and code, please?
@21graphics
@21graphics 2 years ago
What is an RGB camera?
@bobthornton9280
@bobthornton9280 2 years ago
So, I was interested in seeing if there was an accurate one of these, that I could use on episodes of LOST. Then Daniel Dae Kim showed up in this video. Nice.
@wmka
@wmka 2 years ago
Just keeps getting better and better.
@rodrigoferriz8267
@rodrigoferriz8267 2 years ago
What is the name of the software, and is it for public use?
@virtual_intel
@virtual_intel 2 years ago
How does this benefit us viewers? And when can we get access to the tool?
@Ethan-ny4vg
@Ethan-ny4vg 2 years ago
Is the character controller in Unity? Anybody know? Thanks.
@MattSayYay
@MattSayYay 2 years ago
Apparently Chills can't unsee this.
@deepfakescoverychannel6710
@deepfakescoverychannel6710 3 years ago
That's a fake paper without the code.
@FancyFun3433
@FancyFun3433 3 years ago
All right, that's impressive, but how's the multi-camera setup? If I wanted to set up 4 cameras to capture my sides, back, and front, would that be possible, or would it give me a shit ton of errors? Also, something that is important to me is ground work. Does this only work for videos where I have to stand up? Or can I do a front flip, a back flip, or crawling-on-the-ground movements?
@foxsmith770
@foxsmith770 3 years ago
Can individuals interact/ touch each other, such as shaking hands or hugging?
@samdeleon7398
@samdeleon7398 3 years ago
Where do I get the software?
@jackcottonbrown
@jackcottonbrown 3 years ago
Can this run on an iPhone?
@AGUNGKAYA
@AGUNGKAYA 3 years ago
So great. Where's the link, btw?
@abahmunifsumaryono9176
@abahmunifsumaryono9176 2 years ago
Hehehh, don't do it... are you a coder? ----- This is research code. Code tested using TensorFlow 0.11.0. Please see the thesis for the overview. To generate MFCCs, first normalize the input audio using github.com/slhck/ffmpeg-normalize. Then use Sphinx III's snippet by David Huggins-Daines, with a modified routine that saves log energy and timestamps:

    def sig2s2mfc_energy(self, sig, dn):
        nfr = int(len(sig) / self.fshift + 1)
        mfcc = numpy.zeros((nfr, self.ncep + 2), 'd')
        fr = 0
        while fr < nfr:
            start = int(round(fr * self.fshift))
            end = min(len(sig), start + self.wlen)
            frame = sig[start:end]
            if len(frame) < self.wlen:
                frame = numpy.resize(frame, self.wlen)
                frame[self.wlen:] = 0
            mfcc[fr, :-2] = self.frame2s2mfc(frame)
            mfcc[fr, -2] = math.log(1 + np.mean(np.power(frame.astype(float), 2)))
            mid = 0.5 * (start + end - 1)
            mfcc[fr, -1] = mid / self.samprate
            fr = fr + 1
        return mfcc
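For readers puzzling over that snippet: the "modified routine" appends two extra columns per frame, a log energy and a mid-frame timestamp in seconds. Here is a minimal, self-contained NumPy sketch of just those two columns; `frame_log_energy` and its parameter names are illustrative, not from the original code, and the zero-padding of short trailing frames is an assumption about the intent.

```python
import numpy as np

def frame_log_energy(sig, fshift, wlen, samprate):
    """Per-frame log energy and mid-frame timestamps, mirroring the
    last two MFCC columns in the snippet above (illustrative sketch)."""
    nfr = int(len(sig) / fshift + 1)
    feats = np.zeros((nfr, 2))
    for fr in range(nfr):
        start = int(round(fr * fshift))
        end = min(len(sig), start + wlen)
        frame = sig[start:end].astype(float)
        if len(frame) < wlen:
            # Pad short trailing frames to the full window length with zeros.
            frame = np.resize(frame, wlen)
            frame[end - start:] = 0
        feats[fr, 0] = np.log(1 + np.mean(frame ** 2))     # log energy
        feats[fr, 1] = 0.5 * (start + end - 1) / samprate  # timestamp (s)
    return feats
```

For a 16 kHz signal, a 10 ms frame shift and 10 ms window would be `fshift=160, wlen=160`; the timestamp column is the center of each analysis window, which is what lets the features be aligned back to video frames.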