
Neural Network from Scratch | Mathematics & Python Code

127,515 views

The Independent Code

Comments: 274
@G83X 3 years ago
In the backward function of the dense class you're returning a matrix which uses the weight parameter of the class after updating it, surely you'd calculate this dE/dX value before updating the weights, and thus dY/dX?
@independentcode 3 years ago
Wow, you are totally right, my mistake! Thank you for noticing (well caught!). I just updated the code and I'll add a comment on the video :)
@independentcode 3 years ago
I can't add text or some kind of cards on top of the video, so I pinned this comment in the hope that people will notice it!
@trevorthieme5157 2 years ago
@independentcode Why can't you? Did the YouTube developers remove that awesome feature too? No wonder I've felt things have been off for so long!
@jonathanrigby1186 1 year ago
Can you please help me with this... I want a chess AI to teach me what it learned: kzfaq.info/get/bejne/hcV-ms-K1rbZZJc.html
@blasttrash 1 year ago
Just curious: what happens if we propagate the updated weights backward, like in the video? Will it not work? Or will it slowly converge?
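For reference, a minimal sketch of the corrected Dense layer from this thread (base Layer class omitted): the input gradient dE/dX is computed with the same weights that produced the forward pass, and only then are the parameters updated. Using the already-updated weights instead gives a gradient that no longer matches the forward pass; with a small learning rate it usually still trains, but it follows a slightly biased descent direction.

```python
import numpy as np

class Dense:
    def __init__(self, input_size, output_size):
        self.weights = np.random.randn(output_size, input_size)
        self.bias = np.random.randn(output_size, 1)

    def forward(self, input):
        self.input = input
        return np.dot(self.weights, self.input) + self.bias

    def backward(self, output_gradient, learning_rate):
        # dE/dW and dE/dX are both computed from the pre-update weights
        weights_gradient = np.dot(output_gradient, self.input.T)
        input_gradient = np.dot(self.weights.T, output_gradient)
        # only then are the parameters updated
        self.weights -= learning_rate * weights_gradient
        self.bias -= learning_rate * output_gradient
        return input_gradient
```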
@ldx8492 8 months ago
This video, unlike the plethora of other videos on "hOw tO bUiLd A NeUrAl NeTwOrK fRoM sCraTcH", is the literal best. It deserves 84M views, not 84k. It is straight to the point: no 10-minute explanation of pretty curves with zero math, no 20-minute introduction on how DL can change the world. I truly mean it, it is a refreshing video.
@independentcode 8 months ago
I appreciate the comment :)
@ldx8492 8 months ago
@independentcode Thank you for the reply! I am a researcher, and I wanted to create my own DL library, using yours as a base but expanding it with different optimization algorithms, initializations, regularizations, losses, etc. (I am now developing it on my own privately), and one day I'd love to post it on my GitHub. How can I appropriately cite you?
@independentcode 8 months ago
That's a great project! You can mention my name and my GitHub profile: "Omar Aflak, github.com/omaraflak". Thank you!
@robinferizi9073 3 years ago
I like how he said he wouldn't explain how a neural network works, then proceeded to explain it.
@orilio3311 3 months ago
I love the 3b1b style of animation and also the consistency with his notation; this lets people learn the material from multiple explanations without losing track of the core ideas. Awesome work man
@wagsman9999 1 year ago
Not only was the math presentation very clear, but the Python class abstraction was elegant.
@faida.6665 3 years ago
This is basically ASMR for programmers
@nikozdev 1 year ago
I almost agree, the only difference is that I can’t sleep thinking about it
@tanker7757 8 months ago
@nikozdev Bruh, I fall asleep and allow myself to hallucinate in math lol
@nalcow 5 months ago
I definitely felt relaxed :D
@rubenfalvert5540 3 years ago
Probably the best explanation of neural networks on YouTube! The voice and the music in the background are really soothing!
@jasonkhongwir1302 3 years ago
True
@generosonunezarias369 3 years ago
This might be the most intuitive explanation of the backpropagation algorithm on the Internet. Amazing!
@MichaelChin1994 2 years ago
Thank you so very, very, very much for this video. I have been wanting to do machine learning, but without "magic". It drives me nuts when all the tutorials say "from scratch" and then proceed to import TensorFlow. Seriously, THANK you!!!
@independentcode 2 years ago
I feel you :) Thank you for the comment, it makes me genuinely happy.
@darshangowda309 3 years ago
This could be 3Blue1Brown for programmers! You got yourself a subscriber! Great video!
@independentcode 3 years ago
I'm very honored you called me that. I'll do my best, thank you!
@jumpsneak 2 years ago
+1
@quasistarsupernova 2 years ago
@independentcode +1 sub
@ardumaniak 1 year ago
The best tutorial on neural networks I've ever seen! Thanks, you have my subscription!
@adealtas 2 years ago
THANK YOU! This is exactly the video I was looking for. I always struggled with making a neural network, but following your video I made a model that I can generalize, and it made me understand exactly the mistakes I made in my previous attempts. It's easy to find videos on YouTube of people explaining single neurons and backpropagation, but then quickly glossing over the hard part: how you compute the error in an actual network, the structural implementation, and how it all ties together. This approach of separating the Dense layer from the activation layer also makes things 100x clearer, where many people end up carelessly smacking both into the same class. The visuals also make the intuition for the numpy operations much, much easier; that's always something I struggled with, and this explained why we do every operation. Even though I was only looking for one video, after seeing such quality I HAVE to explore the rest of your channel! Great job.
@independentcode 2 years ago
Thank you so much for taking the time to write this message! I went through the same struggle when I wanted to make my own neural networks, which is exactly why I ended up making a video about it! I'm really happy to see that it serves as intended :)
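For readers wondering what that Dense/activation separation looks like in practice, this is roughly the pattern from the video (base Layer class omitted): the activation is its own layer with its own forward/backward, so Dense never has to know about it.

```python
import numpy as np

class Activation:
    # wraps an elementwise function and its derivative as a standalone layer
    def __init__(self, activation, activation_prime):
        self.activation = activation
        self.activation_prime = activation_prime

    def forward(self, input):
        self.input = input
        return self.activation(self.input)

    def backward(self, output_gradient, learning_rate):
        # chain rule: dE/dX = dE/dY ⊙ f'(X); no trainable parameters to update
        return np.multiply(output_gradient, self.activation_prime(self.input))

class Tanh(Activation):
    def __init__(self):
        super().__init__(np.tanh, lambda x: 1 - np.tanh(x) ** 2)
```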
@rogeliogarcia8730 2 years ago
Thanks for making such great-quality videos. I'm working on my Ph.D., and I'm writing a lot of math regarding neural networks. Your nomenclature makes a lot of sense and has served me well. I'd love to read some of your publications if you have any.
@Leo-dw6gk 12 days ago
This video should be the first video you see when you search neural network.
@neuralworknet 1 year ago
Best tutorial video about neural networks I've ever watched. You are doing such a great job 👏
@aflakmada6311 3 years ago
Very clean and pedagogical explanation. Thanks a lot!
@samirdaniels 2 years ago
This was the best mathematical explanation on YouTube. By far.
@samuelmcdonagh1590 10 months ago
Jesus Christ, this is a good video that shows clear understanding. No "I've been using neural networks for ten years, so pay attention as I ramble aimlessly for an hour" involved.
@SleepeJobs 1 year ago
This video really saved me. From matrix representation to chain rule and visualisation, everything is clear now.
@swapnilmasurekar5431 2 years ago
This video is the best on YouTube for neural network implementation!
@erron7682 3 years ago
This is the best channel for learning deep learning!
@_skeptik 2 years ago
This is such high-quality content. I have only basic knowledge of linear algebra, and as a non-native speaker I could still fully understand this.
@Dynamyalo 12 days ago
This has to be the single best neural network explainer video I have ever watched.
@rumyhumy 1 year ago
Man, I love you. So many times I tried to do a multilayer NN on my own, but always faced a thousand problems. This video explained everything. Thank you.
@bernardcrnkovic3769 2 years ago
Absolutely astonishing quality sir. Literally on the 3b1b level. I hope this will help me pass the uni course. SUB!
@nudelsuppe3dsemmelknodel990 1 year ago
You are the only youtuber I sincerely want to see return. We miss you!
@marisakirisame659 1 year ago
This is a very good approach to building neural nets from scratch.
@user-nk8ry3xs5u 9 months ago
Thank you very much for your videos explaining how to build an ANN and a CNN from scratch in Python. Your explanations of the detailed calculations for forward and backward propagation, and for the calculations in the kernel layers of the CNN, are very clear, and seeing how you managed to implement them in only a few lines of code is very helpful in 1. understanding the calculations and processes, and 2. demystifying what is otherwise a black box in TensorFlow/Keras.
@shafinmahmud2925 2 years ago
There are many solutions on the internet, but I must say this one is undoubtedly the best 👍 Cheers man, please keep posting more.
@marvinmartin1373 3 years ago
Amazing approach! Very well explained. Thanks!
@black-sci 4 months ago
Best video, very clear-cut. I finally got backpropagation and the derivatives.
@mohammadrezabanakermani2924 3 years ago
It is the best one I've seen among the explanation videos available on YouTube! Well done!
@aashishrana9356 2 years ago
One of the best videos I have ever seen. I struggled a lot to understand this, and you explained it so beautifully that you made me fall in love with the neural network I had been intimidated by. Thank you so much.
@independentcode 2 years ago
Thank you for your message, it genuinely makes me happy to know this :)
@rishikeshkanabar4650 3 years ago
This is such an elegant and dynamic solution. Subbed!
@anhtuanmai537 1 year ago
I think the last row's indices of the W^T matrix at 17:55 must be (w1i, w2i, ..., wji). Still the best explanation I have ever seen, by the way; thank you so much. I don't know why this channel is still so underrated. Looking forward to seeing your new videos in the future.
@independentcode 1 year ago
Yeah I know, I messed it up. I've been too lazy to add a caption on that, but I really should. Thank you for the kind words :)
@imgajeed 2 years ago
Thank you, that's the best video I have ever seen about neural networks!!!!! 😀
@omegaui 27 days ago
Such a great video. Really helped me to understand the basics.
@lucasmercier5813 3 years ago
Impressive: a lot of information, yet it remains very clear! Good job on this one ;)
@tangomuzi 3 years ago
I think most ML PhDs aren't aware of this abstraction. Simply the best.
@independentcode 3 years ago
I don't know about PhDs since I am not a PhD myself, but I indeed never found any simple explanation of how to make such an implementation, so I decided to make this video :)
@tangomuzi 3 years ago
@independentcode I think you should keep the video series going and show how capable this type of abstraction is: implementing almost every type of neural net easily.
@independentcode 3 years ago
Thank you for the kind words. I did actually take that a step further; it's all on my GitHub here: github.com/OmarAflak/python-neural-networks. I managed to make CNNs and even GANs from scratch! It supports any optimization method, but since it's all on CPU you very quickly get restricted by computation time. I really want to make a series about it, but I'll have to figure out a nice way to explain it without being boring, since it involves a lot of code.
@edilgin622 2 years ago
@independentcode GANs would be great; you could also try RNNs and maybe even some reinforcement learning stuff :D
@lowerbound4803 2 years ago
Very well-done. I appreciate the effort you put into this video. Thank you.
@nikozdev 1 year ago
I developed my first neural network in one night yesterday. It could not learn because of the backward propagation; it was only going through std::vectors of std::vectors to get the output. I was setting weights to random values and tried to guess how to apply backward propagation from what I had heard about it. But it failed to do anything and kept guessing just as I did, giving wrong answers anyway. This video has a clean, comprehensive explanation of the flow and architecture. I am really excited by how simple and clean it is. I am gonna try again. Thank you.
@nikozdev 1 year ago
I did it! Just now my creature learned XOR =D
@cicADA001 3 years ago
your voice is calming and relaxing, sorry if that is weird
@independentcode 3 years ago
Haha thank you for sharing that :) Maybe I should have called the channel JazzMath .. :)
@cankoban 2 years ago
I loved the background music; it gives a peaceful state of mind. I hope you will continue to make videos. Very clear explanation.
@Xphy 2 years ago
Whyyyy don't you have 3 million subscribers? You deserve it ♥️♥️
@arvindh4327 2 years ago
Only 4 videos and you have above 1k subs. Please continue your work 🙏🏼
@e.i.l.9584 9 months ago
Thank you so much, my assignment was so unclear; this definitely helps!
@princewillinyang5993 2 years ago
Content at its peak
@naheedray 3 months ago
This is the best video I have seen so far ❤
@spritstorm9037 1 year ago
Actually, you saved my life. Thanks for doing these.
@ANANT9699 1 year ago
Wonderful, informative, and excellent work. Thanks a zillion!!
@AcceleratedVelocity 1 year ago
I noticed that you are using a batch size of one. Make a separate gradient variable and an ApplyGradients function for batch sizes > 1. Note 1: also change "+ bias" to "np.add(stuff, bias)" or "+ bias[:, None]". Note 2: in backpropagation, sum up the bias gradients on axis 0 (I'm pretty sure the axis is 0) and divide both the weight and bias gradients by the batch size.
@Tapsthequant 1 year ago
Thanks for the tip on the biases.
@guilhermealvessilveira8938 10 months ago
Thanks for the tip on the biases. (1)
@hossamel2006 9 months ago
Can you (or someone else) please explain what note 1 means? Edit: as for note 2, I successfully implemented it (by summing on axis 1), so thanks for the tip.
@nahianshabab724 7 months ago
In the case of mini-batch / batch gradient descent, would the input to the first layer be a matrix of (number_of_features × data_points)? In that case, do I need to compute the average of the gradients during backpropagation in each layer?
@hossamel2006 7 months ago
@nahianshabab724 I guess yes; I saw that in multiple videos. Just add a 1/m factor in the MSE formula.
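To make this thread concrete, here is one possible batched variant of the Dense layer (a sketch, not the video's code; class and variable names are made up). Samples are stacked as columns, so the bias gradient is summed across axis 1 here (with row-stacked samples it would be axis 0, as in the comment above), and the parameter gradients are averaged over the batch:

```python
import numpy as np

class BatchDense:
    # hypothetical batched layer: input has shape (input_size, batch_size)
    def __init__(self, input_size, output_size):
        self.weights = np.random.randn(output_size, input_size)
        self.bias = np.random.randn(output_size, 1)

    def forward(self, input):
        self.input = input
        return np.dot(self.weights, input) + self.bias  # bias broadcasts over columns

    def backward(self, output_gradient, learning_rate):
        batch_size = self.input.shape[1]
        # average parameter gradients over the batch
        weights_gradient = np.dot(output_gradient, self.input.T) / batch_size
        bias_gradient = np.sum(output_gradient, axis=1, keepdims=True) / batch_size
        input_gradient = np.dot(self.weights.T, output_gradient)  # before the update
        self.weights -= learning_rate * weights_gradient
        self.bias -= learning_rate * bias_gradient
        return input_gradient
```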
@filatnicolae2883 1 year ago
In your code you compute the gradient step for each sample and update immediately; I think this is called stochastic gradient descent. To implement full gradient descent, where I update after all samples, I added a counter in the Dense layer class to count the samples. When the counter reached the training-set size I would average all the stored nudges for the bias and the weights. Unfortunately, when I plot the error over epochs as a graph there are a lot of spikes (fewer spikes than with your method, but still some). My training data has (x, y) and tries to find (x + y).
@gregynardudarbe7009 1 year ago
Would you be able to share the code? This is the part where I'm confused.
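A sketch of the accumulate-then-update idea described above (the class name and structure are hypothetical; it mirrors the video's Dense layer but applies one averaged step per pass over the data):

```python
import numpy as np

class DenseFullBatch:
    # hypothetical variant of the video's Dense layer: accumulate gradients,
    # apply one averaged update after the whole dataset has been seen
    def __init__(self, input_size, output_size, dataset_size):
        self.weights = np.random.randn(output_size, input_size)
        self.bias = np.random.randn(output_size, 1)
        self.dataset_size = dataset_size
        self.seen = 0
        self.w_acc = np.zeros_like(self.weights)
        self.b_acc = np.zeros_like(self.bias)

    def forward(self, input):
        self.input = input
        return np.dot(self.weights, input) + self.bias

    def backward(self, output_gradient, learning_rate):
        input_gradient = np.dot(self.weights.T, output_gradient)
        self.w_acc += np.dot(output_gradient, self.input.T)
        self.b_acc += output_gradient
        self.seen += 1
        if self.seen == self.dataset_size:
            # averaged step, then reset the accumulators for the next epoch
            self.weights -= learning_rate * self.w_acc / self.dataset_size
            self.bias -= learning_rate * self.b_acc / self.dataset_size
            self.w_acc[:] = 0.0
            self.b_acc[:] = 0.0
            self.seen = 0
        return input_gradient
```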
@macsiaproduction7823 4 months ago
Thank you for a really great explanation! I wish you would make even more 😉
@ti4680 2 years ago
Finally found the treasure. Please do more videos, bro. SUBSCRIBED
@chrisogonas 1 year ago
That was incredibly explained and illustrated. Thanks
@independentcode 1 year ago
Thank you! I'm glad you liked it :)
@chrisogonas 1 year ago
@independentcode Most welcome!
@yiqiangjizhang 3 years ago
This is so ASMR and well explained!
@salaheddinelachkar5683 3 years ago
That was helpful, thank you so much.
@blasttrash 1 year ago
Amazing video. One thing we could do is have layers calculate their input sizes automatically when possible. For example, if I write Dense(2, 8), then for the next layer I shouldn't need to give 8 as the input size, since it's obvious it will be 8, similar to how Keras does this.
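One lightweight way to get that behavior without touching the layer classes is a small builder helper (a sketch; the helper name is made up, and it assumes the video's Dense and Tanh classes are in scope):

```python
def build_network(layer_sizes):
    # infer each layer's input size from the previous layer's output size
    network = []
    for in_size, out_size in zip(layer_sizes, layer_sizes[1:]):
        network.append(Dense(in_size, out_size))
        network.append(Tanh())
    return network

network = build_network([2, 8, 1])  # widths only, no repeated sizes
```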
@_sarps 2 years ago
This is really dope. The best by far. Subscribed right away
@black-sci 2 months ago
In TensorFlow they use a weight matrix W with dimensions i × j, and then take the transpose in the calculation.
@RAHULKUMAR-sx8ui 1 year ago
You are the best 🥺❤️ Wow, finally I am able to understand the basics. Thanks!
@vanshajchadha7612 6 months ago
This is one of the best videos for really understanding the vectorized form of neural networks! I really appreciate the effort you've put into this. Just as a clarification: the video considers only 1 data point and thereby performs SGD, so during the MSE calculation Y and Y* are in a way depicting multiple responses for 1 data point only, right? So for the MSE it should not actually be using np.mean to sum them up?
@vtrandal 2 years ago
Thank you! Well done! Absolutely wonderful video.
@sythatsokmontrey8879 2 years ago
Thank you so much for your contribution to this field.
@Gabriel-V 2 years ago
Clear, to the point. Thank you. Liked (because there are just 722 likes, and there should be a lot more).
@ionutbosie6017 2 years ago
After 1000 videos watched, I think I get it now. Thanks!
@ramincybran 4 months ago
Without any doubt the best explanation of NNs I've ever seen. Why did you stop producing, my friend?
@IzUrBoiKK 1 year ago
I would like it a lot if you continued your channel, bro
@shivangitomar5557 1 year ago
Amazing explanation!!
@mr.anderson5077 2 years ago
Keep it up. Please make a deep learning and ML series in the future.
@filippeczek9099 3 years ago
Great stuff! I find it even better than the one from 3b1b. Can you think of any way the code can be checked with matrices outside the learning set?
@independentcode 3 years ago
Thank you! If you mean to use the network once it has been trained to predict values on other inputs, then yes, of course. Simply run the forward loop with your input. You could actually make a predict() function that encapsulates that loop, since it will be the same for any network.
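That encapsulated loop could look roughly like this (a sketch; it assumes the video's layer classes with their forward method):

```python
def predict(network, input):
    # run the trained network forward on a single input
    output = input
    for layer in network:
        output = layer.forward(output)
    return output

# e.g. after training on XOR:
# print(predict(network, np.reshape([[0], [1]], (2, 1))))
```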
@huberhans7198 3 years ago
Very nice and clean video, keep it up
@TheAstralftw 2 years ago
Dude this is amazing
@NoomerLoL 10 months ago
Hi there, great video, super helpful. But at 19:21, line 17, the gradient is computed with the updated weights instead of the original weights, which (I believe) caused some exploding/vanishing gradient problems for my test data (the iris flower dataset). Fixing that solved all my problems. If I am wrong, please let me know. Note: I used leaky ReLU as the activation function.
@gamermanv 9 months ago
Hello, how did you fix this issue?
@aiforchange1801 1 year ago
Big fan of yours from today!
@link6563 1 year ago
My brain is smoking. I don't know what the hell is going on, but it is kind of cool. Keep it up!
@adrianl5262 3 years ago
This is an amazing tutorial! Way better than any textbook I have read. I have a question: let's say you want to convert this to a regression model, so you don't use an activation function and instead use a dense layer as your final layer. If I want to output two regressed values, should the loss be a vector of two values in order to backpropagate?
@independentcode 3 years ago
Thank you! The loss (or the error as I call it in the video) is *always* a single number, a scalar. The output of the neural network can of course be a vector, or a matrix, or a tensor, but the error which compares the output of the neural network to the desired output will always be a single number. In order to make a regression model you don't really have to change anything to the existing code, as you said simply remove the last activation function in the network (although usually, for regressions, we use a linear activation). The XOR problem had only 1 output value, but you can have as many as you want (2 in your case). I hope this answers your question.
@adrianl5262 3 years ago
@independentcode Thank you; but wouldn't there be a matrix dimension issue, since the loss/error is a scalar and the dense layer is a 2-by-n matrix?
@independentcode 3 years ago
@adrianl5262 Not really. The output of the neural network is passed into the loss function (here we used MSE). So for instance you would have: E = (1/2) * ((g1 - y1)^2 + (g2 - y2)^2), where g is the desired output and y is the actual output. You can try it for yourself. Change the dataset to something like:
X = np.reshape([[0, 0], [0, 1], [1, 0], [1, 1]], (4, 2, 1))  # actually whatever you want here
Y = np.reshape([[0, 2], [1, 3], [4, 1], [0, 0]], (4, 2, 1))  # each output is a 2x1 column vector
@snapo1750 1 year ago
Thank you very very much for this video....
@quantumsoul3495 1 year ago
That is so satisfying
@zozodejante8350 3 years ago
I love you, best ML video ever
@yakubumshelia1668 1 year ago
Very educational
@prem7676 1 year ago
Awesome man!!
@erikasgrim2871 3 years ago
Amazing tutorial!
@nathanlove4449 1 year ago
Yeah, this is awesome
@vilmospalik1480 6 months ago
This is a great video, thank you so much
@rehelm3114 7 months ago
Well done
@ShadabAlam-jz4vl 1 year ago
Best tutorial💯💯💯💯
@lakshman587 1 year ago
Thank you so much for the video!!!
@solosoul2041 2 years ago
Brother, could you please do a series on neural networks from basics to advanced, including CNNs?
@Djellowman 2 years ago
Is there a practical reason why the activation functions are implemented as layers, rather than the other layers, such as Dense, taking the activation function as an argument & applying it internally?
@independentcode 2 years ago
Yes, for simplicity. If you apply the activation inside the layer, then that layer also has to account for the activation during backward propagation. And the Dense layer is not the only layer that might use an activation, so would you implement it in every such layer? That's why it's a separate thing.
@Djellowman 2 years ago
@independentcode That's a good point, although I suppose you could implement the activation-function handling for both forward and backward propagation in the base Layer class, right? I'm asking this because I started working on a project where I build a dense neural net to classify some data, but I decided I might as well build a little neural-net library. Your video made me think about creating a better design. I first passed the architecture of the network as a list of layer lengths to a DenseNeuralNet class. I prefer your design of making a base Layer class that functions as an abstract base class and specifying separate layer objects, as it's more modular than my initial design.
@Logon1027 3 years ago
Really great video; it reminds me of the 3blue1brown style. I plan on revisiting it tomorrow. What software do you use to create these videos/graphics?
@independentcode 3 years ago
Thanks. It is actually the animation library from 3b1b; the link is in the description :)
@OmkarKulkarni-wf7ug 4 months ago
How is the output gradient calculated and passed into the backward function?
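For context, in the video the initial gradient comes from the derivative of the loss; each layer's backward then returns dE/dX, which becomes dE/dY for the layer before it. Roughly (using the video's MSE derivative; network, y, output, and learning_rate come from the surrounding training loop):

```python
import numpy as np

def mse_prime(y_true, y_pred):
    # derivative of the mean squared error with respect to the prediction
    return 2 * (y_pred - y_true) / np.size(y_true)

# inside the training loop:
grad = mse_prime(y, output)                     # seed: dE/dY of the last layer
for layer in reversed(network):
    grad = layer.backward(grad, learning_rate)  # each layer hands dE/dX backwards
```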
@cw9249 1 year ago
Shouldn't the gradient of the input vector involve some sort of sum along one axis of the weights matrix? For example, if your input vector has shape (2, 1) and your weights matrix is (4, 2), you do np.matmul(weights, input) and create a vector of shape (4, 1); the gradient of the input vector with respect to the output vector is the weights matrix summed along axis 0.
@davidmurphy563 1 year ago
Tanh is one of those things that sounds great in principle, but in practice people use linear. Frankly, I just clamp it these days. Curves kill the spread and clump it all up, and lines work fine; they're close enough to curves.
@leandrofrutuoso985 3 years ago
This is indeed the best explanation of the math behind neural networks I've found on the internet. Could I please use your GitHub code in my final work for college?
@independentcode 3 years ago
Thank you for the kind words! Other videos are coming up ;) Yes of course, it is completely open source.
@breakfastwithdave8933 2 years ago
Is there a way to train the model, save the trained network, and use it to make predictions? If you could explain, it would be very much appreciated. Great video, by the way!!
@pointofinsanitydev169 2 years ago
I implemented saving and loading with pickle
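One minimal way to do that with pickle, assuming the network is the plain list of layer objects from the video (a sketch; the filename is arbitrary):

```python
import pickle

# save the trained network (the list of layer objects)
with open("network.pkl", "wb") as f:
    pickle.dump(network, f)

# later: load it back and reuse the forward pass to make predictions
with open("network.pkl", "rb") as f:
    network = pickle.load(f)
```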
@AynurErmis-vp9lq 4 months ago
BEST OF BEST THANK YOU
@areegfahad5968 1 year ago
Amazing!!
@SoftwareDeveloper2217 4 months ago
It is the best. It is a beauty, because the explanation is great.
@seungsooim2183 2 years ago
Hey, amazing video, man. I just have one question about the MNIST convolutional file on your GitHub. You only trained your neural network to identify 2 out of 10 classes, i.e. 0 and 1. If I were to extend it to all 10 classes, with 100 cases each when I preprocess the data, would all the dimensions in your original network layers stay the same, like the Reshape and Dense layers? Basically, I was wondering if there is a way I can use the convolutional layer so that it accounts for all classes in the dataset.
@independentcode 2 years ago
Hey, thanks! Sure, all you have to do is make the last dense layer of the network output the number of classes you want to predict. For now it is 2 (0 and 1). Make it 10 to handle all classes, but also don't forget to add the other classes to the training data in the preprocess_data function.
@seungsooim2183 2 years ago
@independentcode Really sorry, I have another question. In the preprocess function, can I initialize an empty list and then use a for loop: for i in range(10): current = np.where(y == i)[0][:limit], and then on the next line write indices = indices + current? I'm not exactly sure how np.hstack works; it seems that I need to actually have 10 different lists and apply hstack to them all at once. I apologize again for bombarding you.
@seungsooim2183 2 years ago
Also, do you offer tutoring? I think I might purchase your books on machine learning with Python and implementing algorithms from scratch. The main issue I have, aside from the math, is array dimensions for scaling/fitting. It's actually my only concern: for any method or function of sklearn and Keras, I always get stuck on preprocessing because the dimensions are not correct. If you have any resource you can refer me to, I'd greatly appreciate it. Or if you offer tutoring, I'd be more than happy to schedule a session.
@seungsooim2183 2 years ago
@independentcode Actually, I got it. I tried exactly what I suggested: initializing an empty list, appending the first 100 observations of every class, and then applying hstack lol
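For anyone landing here later, the index-selection step described in this thread could look roughly like this (a sketch; the function name is made up, and the rest of the preprocessing is omitted):

```python
import numpy as np

def class_indices(y, limit, num_classes=10):
    # first `limit` samples of each class, merged and shuffled
    per_class = [np.where(y == i)[0][:limit] for i in range(num_classes)]
    indices = np.hstack(per_class)
    return np.random.permutation(indices)
```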