Machine Learning in C (Episode 1)

210,620 views

Tsoding Daily

A day ago

Chapters:
- 00:00:00 - Intro
- 00:01:21 - What is Machine Learning
- 00:03:03 - Mathematical Modeling
- 00:08:15 - Plan for Today
- 00:10:32 - Our First Model
- 00:12:24 - Training Data for the Model
- 00:17:05 - Initializing the Model
- 00:19:52 - Measuring How Well Model Works
- 00:27:56 - Improving the Cost Function
- 00:32:27 - Approximating Derivatives
- 00:41:25 - Training Process
- 00:45:59 - Artificial Neurons
- 00:50:11 - Adding Bias to the Model
- 00:56:16 - More Complex Model
- 00:58:41 - Simple Logic Gates Model
- 01:06:04 - Activation Function
- 01:15:24 - Troubleshooting the Model
- 01:25:04 - Adding Bias to the Gates Model
- 01:27:36 - Plotting the Cost Function
- 01:29:28 - Muxiphobia
- 01:31:43 - How I Understand Bias
- 01:33:20 - Other Logic Gates
- 01:36:13 - XOR-gate with 1 neuron
- 01:38:46 - XOR-gate with multiple neurons
- 01:49:14 - Coding XOR-gate model
- 01:57:53 - Human Brain VS Artificial Neural Network
- 02:00:26 - Continue coding XOR-gate model
- 02:15:14 - Non-XOR-gates with XOR Architecture
- 02:18:30 - Looking Inside of Neural Network
- 02:24:57 - Arbitrary Logic Circuits
- 02:27:23 - Shapes Classifier
- 02:29:42 - Better Representation of Neural Networks
- 02:30:36 - Outro
- 02:30:50 - Smooch
References:
- github.com/tsoding/perceptron
- Notes: github.com/tsoding/ml-notes
Support:
- BTC: bc1qj820dmeazpeq5pjn89mlh9lhws7ghs9v34x9v9

Comments: 297
@alexgodson4176 · a year ago
"It takes a genius to make something complex sound so simple", Thank you for teaching so well
@Acetyl53 · a year ago
I disagree. This notion is why everything has turned into edutainment. Terry had a lot to say about that too.
@Acetyl53 · a year ago
@Cody Rouse I agree with my disagreement and disagree with your disagreement with my disagreement.
@SimGunther · a year ago
The actual quote was "Every genius makes it more complicated. It takes a super-genius to make it simpler."
@samgould8567 · a year ago
@@Acetyl53 Given two people with identical knowledge of a subject, the person who can explain it more thoroughly and understandably to a layperson either has elevated communication abilities, or has a deeper understanding beyond what can easily be measured. In either case, they are demonstrably smarter in what we care about.
@mar4ko07 · a year ago
If you can explain a topic to a 5-year-old, that means you understand the topic.
@chjayakrishnajk · 18 days ago
Generally I can't make myself sit and watch your videos in their entirety, because I don't know what you're doing, especially in the C videos. But today I watched this entire video, mainly because of how simply you explained it.
@rterminatu · a year ago
It's much more interesting to learn machine learning like this than to just use some pre-made library. I'm far more interested in the underlying mathematics and algorithms than in a 'cat food' approach to learning, where we just get a brief overview of how to use some preexisting technology. The mathematics and algorithms are interesting and worth learning, especially if you want to be innovative in any field. While it might not be an 'expert' example, it is an intuitive explanation which is as in-depth, if not more so, than the university-level AI course I have taken. Thanks for the great content!
@alextret6787 · a year ago
The rarest kind of content on YouTube. Pure C, not even C++. Very cool.
@samwise1491 · a year ago
1:32:27 The bias is needed because otherwise, when all your inputs are zero, the output is fixed no matter what your weights are. Y was being calculated as 0*w1 + 0*w2 and then passed through the sigmoid, and S(0) = 0.5. Adding the bias allows the model to provide a non-zero input to the sigmoid in that case. Of course this is an old stream, so I'm sure you figured that out later; just in case anyone watching was curious. Great stuff as always, Zozin!
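A minimal C sketch of that point (the function name `sigmoidf`, the weights, and the bias value here are illustrative assumptions, not taken from the stream's code): without a bias, the (0, 0) input pins the output at 0.5.

```c
#include <math.h>
#include <stdio.h>

// Standard logistic sigmoid.
float sigmoidf(float x) { return 1.0f / (1.0f + expf(-x)); }

int main(void) {
    float w1 = 5.0f, w2 = -3.0f; // arbitrary weights
    float b  = -2.0f;            // arbitrary bias
    float x1 = 0.0f, x2 = 0.0f;  // the problematic (0, 0) input
    // Without a bias, 0*w1 + 0*w2 == 0 for ANY weights, so the
    // output is stuck at sigmoid(0) = 0.5.
    printf("no bias:   %f\n", sigmoidf(x1*w1 + x2*w2));
    // The bias gives the sigmoid a non-zero input even at (0, 0).
    printf("with bias: %f\n", sigmoidf(x1*w1 + x2*w2 + b));
    return 0;
}
```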
@ashwinalagiri-rajan1180 · a year ago
A simpler explanation is that sometimes you want to move the entire curve rather than just change the slope of the curve.
@AD-ox4ng · a year ago
Another simple explanation is that when our inputs are in a range of values between A and B, say BMI values, which in most standard cases are between 16 and 30 or so, it's helpful to standardize the range to something between 0 and 1. The weights multiply the inputs to scale them up or down. We could divide the upper BMI boundary by 30 to get 1. However, when we divide the lower BMI boundary by 30, we don't get 0. In fact, no matter what number we choose, we cannot bring the lower boundary to 0 by multiplication alone. This is because there is a "bias" (an offset, an **addition**) in the range. The bias term is the extra addition/subtraction needed to make sure the range starts at 0; then the scaling makes it run from 0 to 1.
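A tiny C sketch of that standardization idea (min-max scaling; the 16-30 bounds are just the BMI numbers from the comment): the subtraction plays the role of the bias, since multiplication alone cannot shift the range.

```c
// Min-max normalization: maps [lo, hi] onto [0, 1].
// The "- lo" shift is exactly the additive (bias-like) part that
// multiplication alone cannot provide.
float normalize(float x, float lo, float hi) {
    return (x - lo) / (hi - lo);
}
// Example: normalize(16.0f, 16.0f, 30.0f) == 0.0f
//          normalize(30.0f, 16.0f, 30.0f) == 1.0f
```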
@ElGnomistico · a year ago
I remember from my machine learning classes that the bias term comes from the idea of having a threshold value for the activation. Instead of writing an inequality, you would subtract the threshold from the usual perceptron's linear function (W • x - threshold). The bias is just the negative threshold for mathematical convenience. In fact, the bias can also be thought of as a weight whose input is always 1 (helps understand why you also updated the bias the same as you do with the rest of the weights).
@liondovelearning1391 · a year ago
The 🎉 did I just read?
@Mr4NiceOne · a year ago
Easily the best introduction to machine learning. Thank you for taking your time to make these!
@bossysmaxx3327 · 11 months ago
I was waiting for this tutorial for like 7 years, and finally someone made it. Good dude, subscribed.
@filipposcaramuzza2953 · 11 months ago
About the XOR thing: the way I understood it is that with a single neuron you can only model a linear equation, i.e. a line. If you plot the inputs of the OR function on a 2D graph, putting a "0" at the coordinates (0, 0) and "1" at the coordinates (0, 1), (1, 0), (1, 1), you can clearly see that the 0 and 1 outputs are "linearly separable" (you can draw a single line that separates them). If you plot the XOR function instead, you won't be able to separate the 0s and 1s with a single line, so you need a more complex model, e.g. two neurons. Moreover, the weights can be seen as the slope of the line and the bias as the intercept.
@blackhaze3856 · a year ago
This man is the bible of the programming field.
@danv8718 · 11 months ago
As far as I understand it, the reason for using the square instead of, for example, the absolute value is that apart from always giving you positive values, so they won't cancel out when you add them up, the square function has some nice properties in terms of calculus. For example, its derivative exists everywhere (this is not the case for the absolute value), and this can be important for implementing algorithms like gradient descent. The reason can't be to amplify any error, even a very small one, because if the error is close to zero and you square it, instead of amplifying it you make it even much smaller! But anyway, this was a thoroughly enjoyable intro to ML.
@artemfaduev6228 · 7 months ago
I am an ML engineer and you are absolutely correct. The main reason to use the square instead of the absolute value is to be able to take derivatives at any given point, in order to calculate and perform gradient descent, which is used for model optimization. But there are some downsides. For example, it really bumps up the error. Imagine you are predicting apartment prices based on some features provided to you: if the error is $1,000, squaring ramps it up to a whopping $1,000,000. That means your model will be affected more by the outliers in your training data, and it will try to compensate for the damage of the outlying squared values. That is why ML engineers often have to choose between MSE (mean squared error) and MAE (mean absolute error). If you need more optimization and there are no obvious outliers, pick the first one. If there are a lot of outliers in the data, pick MAE to make your model less "emotional", if you could say so :)
@burarum1 · 6 months ago
@@artemfaduev6228 MSE and MAE are not the only loss functions that exist. MSE/L2 loss means we assume Gaussian noise in the data. Instead of Gaussian noise we could use a Student-t distribution as the noise distribution and use its negative log-likelihood (differentiable everywhere) as the loss. Student-t has heavier tails (with a controllable hyperparameter nu), so it is more robust to noise. There is also something like the Huber loss.
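A small C sketch of the MSE-vs-MAE trade-off discussed above (the error values are made up for illustration): one large outlier dominates the mean squared error while barely moving the mean absolute error.

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    // Per-sample errors; the last one is a deliberate outlier.
    float errors[] = {1.0f, -2.0f, 1.5f, 1000.0f};
    size_t n = sizeof(errors) / sizeof(errors[0]);
    float mse = 0.0f, mae = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        mse += errors[i] * errors[i]; // squaring blows the outlier up
        mae += fabsf(errors[i]);      // absolute value keeps its scale
    }
    mse /= n;
    mae /= n;
    printf("MSE = %.1f\n", mse); // ~250001.8 -- dominated by the outlier
    printf("MAE = %.1f\n", mae); // ~251.1   -- much less affected
    return 0;
}
```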
@gabrielmartinsdesouza9078 · a year ago
I've spent the last 7 hours writing code, "following" this video. Thanks a lot for this.
@potatopassingby1148 · a year ago
Zozin, you are a wonderful teacher for anything Computer Science related :) you are teaching in a way that actually helps people understand things, so thank you for your videos. If only Universities had people like you to teach
@hc3d · a year ago
Indeed. This is the best ML explanation I have seen so far, finally things are making sense.
@rubyciide5542 · 8 months ago
Bro, this dude's brain is definitely something else.
@0ia · 6 months ago
Is he called Zozin? I thought that was him pronouncing Tsoding with an accent
@RoadToFuture007 · 6 months ago
@@0ia I've heard his name is Alexey.
@teenspirit1 · 8 months ago
01:38:00 @Tsoding Daily The reason you couldn't model XOR with a single neuron is that XOR requires a non-linear classifier to separate the two cases. If you arrange the outputs in a 2x2 matrix you can see why. AND ([0 0 0 1]): you can draw a straight line separating the 0s and the 1. OR ([0 1 1 1]): again, the classifier only needs a straight line. XOR ([0 1 1 0]): you need some sort of oval shape; a line isn't enough to classify it.
@Anonymous-fr2op · 3 months ago
Yeah, equation of an ellipse maybe?
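For the algebra-minded, here is the standard impossibility argument behind the comments above, sketched in LaTeX (it is not from the video): a single linear-threshold neuron computing XOR would need weights and a bias satisfying

```latex
\begin{aligned}
b &< 0             && \text{(input } (0,0) \text{ must give } 0)\\
w_1 + b &> 0       && \text{(input } (1,0) \text{ must give } 1)\\
w_2 + b &> 0       && \text{(input } (0,1) \text{ must give } 1)\\
w_1 + w_2 + b &< 0 && \text{(input } (1,1) \text{ must give } 0)
\end{aligned}
```

Adding the middle two inequalities gives w1 + w2 + 2b > 0, i.e. w1 + w2 + b > -b > 0, which contradicts the last line. So no single line can separate the XOR outputs.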
@SlinkyD · a year ago
Watching you made me realize that understanding the definitions and concepts is perhaps the most important part of programming. The second most important part, close behind, is distilling a high-level concept down to its base components. Third is typing. My knuckles hurt from all the vids I watched. Now I wanna watch parametric boxing (all technique while blindfolded).
@nist7783 · a year ago
I learned a lot of concepts watching this; it's a very detailed and awesome explanation, thank you.
@AnnasVirtual · a year ago
You should try to model a continuous function like sin, cos, or even Perlin noise and see if the neural network can act like it.
@patrickjdarrow · a year ago
I've done this with neural networks. In short, a common neural network with ReLU activations will look like a piecewise function with linear characteristics at the boundaries. In practice this is avoided with sinusoidal output encoding, which makes the issue of sinusoid approximation trivial.
@ashwinalagiri-rajan1180 · a year ago
@@patrickjdarrow so you approximated sine with a sine 🤣🤣🤣
@patrickjdarrow · a year ago
@@ashwinalagiri-rajan1180 No, I said that's what's done in practice
@ashwinalagiri-rajan1180 · a year ago
@@patrickjdarrow yeah ik i was just joking
@pauleagle97 · 11 months ago
The quality level of this content is off the charts, thank you! Subscribed.
@The_Savolainen · 11 months ago
Well, this is cool! Just using mathematics and the power of a computer, you built something that was able to predict the next number (even if it was just 2 times the input), and something that can recognize logic gates. That is just mind-boggling. And with only 1-3 neurons. I was very interested in this topic before this video, and now I am hooked!
@darkdevil905 · a year ago
I have a degree in Physics and I have a feeling you understand mathematics more deeply than I do lol. The central difference method is for sure the best, but it doesn't matter: your way of teaching and problem solving absolutely rocks and is the best.
@klnnlk1078 · a year ago
The mathematics needed to understand what a neural network does is extremely elementary.
@darkdevil905 · a year ago
@@klnnlk1078 true
@user-jl6ix6ek9o · a year ago
Like basic calculus and linear algebra.
@scarysticks66 · 9 months ago
Not actually; if you go deep into convolutional NNs or other architectures, the math needed there is pretty advanced, like tensor calculus @@klnnlk1078
@Amplefii · 6 months ago
Well, I'm bad at math, so it seems complicated to me, but I still love learning about it. I need to find the time to study some math; the American school system didn't do me any favors.
@user-br6ku7jj6n · a month ago
I've been trying to understand ML basics for a while now, and I don't know why but it finally clicked. Thank you!
@alexandersemionov5790 · 11 months ago
And some people pay for their degree without ever seeing the fun of exploration. This was really fun, and degree-worthy material.
@margarethill801 · 3 months ago
THANK YOU SO MUCH FOR THIS TUTORIAL!!! I have learnt so much; your explanation and reasoning are very insightful. Delivery on the subject matter: EXCELLENT, and humorous :)
@johanngambolputty5351 · a year ago
The nice thing about doing this in C is that you could use OpenCL (C99-based syntax) to parallelize certain operations on GPU cores (as if you were using a thread pool) without really changing much of the logic (as long as you're not using language features that aren't available, like function pointers).
@llothar68 · a month ago
Madness, you want to use libraries for this.
@simonwagner1426 · a year ago
Damn, I just clicked on this video for fun, but got very much hooked! You're doing a great job of making these videos entertaining!🙌🏻
@byterbrodTV · a year ago
Awesome video and very interesting topic! Can't wait to see the next episode. Next stream: "4-bit adder"; I see where we're going. We're slowly approaching building an entire CPU as a neural network :D I like the way you explain complicated things from the ground up. I love it. It's worth a lot. Thank you!
@kirkeasterson3019 · 7 months ago
I appreciate you using a fixed seed. It made it easy to follow along!
@ammantesfaye · 9 months ago
This is pretty brilliant for understanding neural nets from first principles. Thanks, Tsoding.
@arkadiymel5987 · a year ago
1:37:35 I think it is possible to model XOR with just one neuron if its activation function is non-monotonic, such as a sine function
@landsgevaer · a year ago
Yeah, essentially a xor b = (a+b) mod 2 suffices, if false=0 and true=1. There, a+b is the linear combination, and mod is the periodic function. Similarly works with sin and cos; and, up to scaling, any activation function with multiple zeros (except the zero constant function).
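A quick C sketch verifying that formula (the pi/2 scaling is my own choice, made so that the sums {0, 1, 2} map onto {0, 1, 0}):

```c
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979f

int main(void) {
    // One "neuron": linear part x1 + x2, periodic activation sin(pi/2 * z).
    // x1 + x2 = 0 -> sin(0)    = 0
    // x1 + x2 = 1 -> sin(pi/2) = 1
    // x1 + x2 = 2 -> sin(pi)   = 0 (up to float rounding)
    for (int x1 = 0; x1 <= 1; ++x1)
        for (int x2 = 0; x2 <= 1; ++x2) {
            float y = sinf(PI / 2.0f * (float)(x1 + x2));
            printf("%d ^ %d = %d, sine neuron: %f\n", x1, x2, x1 ^ x2, y);
        }
    return 0;
}
```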
@revimfadli4666 · a year ago
Or if you use a cascaded network with skip connections, or an Elman-Jordan network.
@daviskipchirchir1357 · 11 months ago
@@landsgevaer explain this more. I want to see something
@landsgevaer · 11 months ago
@@daviskipchirchir1357 I think I wrote it clearly enough, if I say so myself. It doesn't get more precise than when written in a formula. Not sure what the something is that you want to see, but thank you.
@potatopassingby1148 · 11 months ago
A more intuitive way of understanding why XOR "stagnates at 0.25" is that the neural network is able to model up to 3 of the 4 states we want it to model. After modeling 3 of those states, it absolutely cannot model the 4th one, due to the limitations of how it was built, and that last case makes up 25% of all the XOR scenarios we want it to imitate :D So at best it will still have a cost of 0.25 (or 75% accuracy).
@blacklistnr1 · a year ago
1:09:17 Activation functions have the main purpose of being non-linear (having weird shapes), because if you add lines to lines you get more lines, so your 1-trillion-layer-deep neural network is just as effective as your last brain cell. With something like ReLU (which is a glorified if) you can have a neuron light up for specific inputs and then trigger other specific neurons, building complexity with every layer.
@michaeldeakin9492 · a year ago
FP arithmetic for adding lines is also non-linear: kzfaq.info/get/bejne/d8tpeK503q-VqIk.html. In case you haven't seen it, the whole talk is amazing and built on this.
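A tiny C sketch of the "lines plus lines" point (the coefficients are arbitrary): stacking linear layers without an activation collapses into a single linear layer, which is exactly why a non-linearity like ReLU is needed.

```c
#include <stdio.h>

// ReLU -- the "glorified if" that breaks the collapse shown below.
float relu(float x) { return x > 0.0f ? x : 0.0f; }

int main(void) {
    // Two stacked linear "layers": y = a1*x + b1, then z = a2*y + b2.
    float a1 = 2.0f, b1 = 1.0f, a2 = -3.0f, b2 = 0.5f;
    // Their composition is just another line:
    // z = a2*(a1*x + b1) + b2 = (a2*a1)*x + (a2*b1 + b2).
    float a = a2 * a1, b = a2 * b1 + b2;
    for (float x = -1.0f; x <= 1.0f; x += 0.5f)
        printf("stacked: %8.3f  single line: %8.3f  with relu: %8.3f\n",
               a2 * (a1 * x + b1) + b2,      // two layers, no activation
               a * x + b,                    // identical single layer
               a2 * relu(a1 * x + b1) + b2); // activation changes the shape
    return 0;
}
```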
@antronixful · a year ago
Nice explanation... the square is used because the variance has units of "stuff we're measuring"². And note that if your error is e.g. 0.8, it will not be amplified by squaring it.
@neq141 · a year ago
I think he will be making *us* in C in the next episode; he is becoming too powerful!
@rngQ · a year ago
fool, he has already constructed this entire timeline in C
@neq141 · a year ago
@@rngQ bold of you to assume only _this_ timeline
@CodePhiles · 11 months ago
This was really amazing; I learned a lot from you. Thank you for all your insights.
@rogo7330 · a year ago
Every word on this channel I take with a cup of tea.
@valovanonym · a year ago
Wow, I commented about this a few weeks ago. I guess it was recorded before, but I'm happy to see this VOD. I can't wait to watch it and the next one too.
@v8metal · a year ago
absolutely genius. amazing stuff. thanks for sharing ! *subscribed*
@hc3d · a year ago
"I'm ooga booga software developer (...)" 🤣🤣 23:24 But in all seriousness, great intro explanation at the beginning. Edit: After watching the whole thing, this was the best ML explanation I have ever seen. Looking forward to the next video.
@TransformationPradeepBhardwaj · 11 months ago
Keep adding videos, you're doing a great job for C lovers.
@LeCockroach · 3 months ago
Super useful info, I learned a lot! Thanks for sharing
@StevenMartinGuitar · a year ago
Great! Looking forward to more in the series
@torphedo6286 · a year ago
Thanks dude, I tried to write some AI code in C a while ago, but it was incredibly difficult because all the information out there is for Python. Saving this video for later.
@eprst0 · a year ago
Try it using c++ 😄
@AD-ox4ng · a year ago
Couldn't you just follow along with one of the ML implementation videos in pure Python and translate the code?
@torphedo6286 · a year ago
@@AD-ox4ng I wasn't aware of any pure-Python implementation videos; all the info I saw used numpy. Re-implementing matrices while trying to understand the math at the same time sucked.
@alefratat4018 · a year ago
@@andrewdunbar828 There are a lot of deep learning libraries / frameworks in C that are relatively simple and less heavyweight than the mainstream frameworks in Python.
@hedlund · a year ago
Oh hell yeah, dat timing though. I started doing precisely this just yesterday evening, and in C rather than my go-to C++, at that. I got stuck on backpropagation, so I decided to spend tonight doing some more reading. Said reading has now been moved to tomorrow evening :) Edit: Oh, and side note: have you read The Martian? A quote came to mind: "So it's not just a desert. It's a desert so old it's literally rusting."
@grantpalmer5446 · a year ago
For some reason I've tried starting and am already stuck around the 20-minute mark: I can't get it to generate a random number from 1 to 10. I am using the same code and am confused about why it isn't working. Has anyone encountered this problem?
@usptact · 11 months ago
As someone with degree in machine learning (before it was cool), I found this amusing. Keep it up.
@zedeleyici.1337 · 4 months ago
you are such an amazing teacher
@arjob · 5 months ago
Thank you from my heart.
@TheSpec90 · 3 months ago
49:40 The bias term is important; for example, it helps avoid many classical overfitting problems when fitting the entire training set. As far as I know, the bias is just a parameter created to avoid these issues and, like you said, improve model training. EDIT: The correct term for avoiding overfitting (and also underfitting) is the regularization term, which arises when we split the data into a training set and a test set (to validate our model); with this term we can get to the correct model faster (for larger datasets and complex models).
@abi3135 · 2 months ago
2:23:58 That's cool: the model found this seemingly random configuration that, after a final boolean simplification, gives us x ^ y. The "OR" neuron actually does x & y, the "NAND" neuron does x | y, and the "AND" neuron does (~x) & y. So after forward-feeding, (~(x & y)) & (x | y) actually simplifies to x ^ y in the end.
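A quick exhaustive check of that simplification in C (the neuron labels follow the comment above, not the stream's code):

```c
#include <stdio.h>

int main(void) {
    for (int x = 0; x <= 1; ++x)
        for (int y = 0; y <= 1; ++y) {
            int a   = x & y;    // what the "OR" neuron actually learned
            int b   = x | y;    // what the "NAND" neuron actually learned
            int out = (!a) & b; // the output neuron: (~a) & b
            printf("%d ^ %d = %d, network: %d\n", x, y, x ^ y, out);
        }
    return 0;
}
```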
@sohpol · 3 months ago
This is wonderful! More!
@hamzadlm6625 · a year ago
Nope, I didn't expect that either, but great work Zozin. Thank you for the effort you put into explaining these concepts.
@mr_wormhole · a year ago
He is using clang to compile; he just earned a follower!
@mihaicotin3261 · a year ago
Hey! Can you do your own "build a kernel" series? It would be a cool thing! Maybe the community will love it. Keep up the good work!❤
@monisprabu1174 · 10 months ago
Thank youuuuuu, I always wanted a C/C++ machine learning tutorial, since Python is slow at everything.
@RahulPawar-dl9wn · 11 months ago
Awesome video, superbly making all the concepts simple to understand. 👌🏻 I'm following along using Termux on my Android phone 😅
@cr4zyg3n36 · 9 months ago
Love your work. I was looking for someone who is a die-hard C fan.
@TsodingDaily · 9 months ago
I absolutely hate C
@deniskhakimov · 6 months ago
@@TsodingDaily dude, the more you hate something, the more you like it. I mean, you can't hate something you don't care about. So you were interested in it enough to start hating it 😂
@Czeckie · a year ago
the lore deepens. degree in ferrous metallurgy, really?
@TsodingDaily · a year ago
It was more of a CS in Ferrous Metallurgy. It was the closest to CS option near me lol.
@Cemonix · a year ago
A couple of days ago I was programming a feedforward neural network from scratch in Python, and I have to say it was painful and interesting.
@superkimsay · a year ago
Holy cow. This is good stuff!
@arslanrozyjumayev8484 · a year ago
My man literally has a degree in rust! Must watch content 💯!!!!!!
@hbibamrani9890 · a year ago
This guy is a real hero!
@themakabrapl2248 · a year ago
For the 4-bit adder you would need 5 output neurons, because if you add 1111 and 1111 the result is 1 1110, which doesn't fit in 4 bits; with only 4 output neurons the sum would overflow and it won't work correctly.
@wagsman9999 · a year ago
If you ever find yourself in middle America, I would love to buy you lunch!
@fabian9300 · 3 months ago
Quite a nice intro to machine learning in C, but there's something you missed during the explanation: one does not square the error to amplify it, but because we want a distance-like (always non-negative) measure of the error. Otherwise, if our model's f(xᵢ) were greater than the actual observed value of f(xᵢ), the cost function for xᵢ would return a negative value.
@diegorocha2186 · a year ago
As a Brazilian, I'm glad I know English so that I have access to this kind of content!!!! Amazing stuff as usual, Mr Uga Buga Developer!
@konigsberg72 · 11 months ago
The guy really is very good.
@TheSpec90 · 4 months ago
In 2 hours, for free, you teach much more than 99% of the people selling courses out there.
@yamantariq · 22 days ago
Could someone tell me what Vim motion he uses at 18:00? I just don't understand how he flips "rand_max" to "max_rand" in seemingly one click. Thanks in advance.
@nofeah89 · a year ago
Such an interesting stream!
@dewijones92 · 4 months ago
LOVE this
@TheChucknoxus · a year ago
Loved this video. We need more content like this, looking under the hood and removing all the hocus pocus.
@fiendishhhhh · a year ago
this is amazing dude.
@mamamia5668 · a year ago
Great stuff, was very entertaining :)
@shotasdg3679 · a year ago
I love the way you talk
@bhavik_1611 · a year ago
😶😶 I was looking for ML in C/C++. I will learn this as soon as I finish my uni exams. Please consider continuing this series (⁠^⁠^⁠).
@GlobalYoung7 · a year ago
Thank you 👍
@x_vye · a year ago
Just as I was wondering if you had any new uploads, not even 1 minute ago!
@paolomonni8333 · a year ago
Awesome video
@Fnta_discovery · a year ago
Hi, I have a question: is it difficult to understand AI using C/C++? Which method do you recommend?
@MACAYCZ · a year ago
Hi, can I ask what font you are using?
@ahmedfetni9349 · a year ago
Thank you
@alib5503 · a year ago
1:29:05 I think you can write your results to a file, and gnuplot has an option that lets you plot it in real time.
@dimak76 · 6 months ago
Just curious: couldn't we call the sigmoid function once, as sigmoid(forward()), inside the cost() function, instead of calling it every time inside forward()?
@nunoalexandre6408 · a year ago
Love it!!!!!!!!!!
@viacheslavprokopev8192 · a year ago
ML/AI is a function approximation basically.
@elonmed · 11 months ago
amazing ❤
@godsiom · a year ago
yes, ML && Tsoding! let's go!!
@estervojkollari4264 · 10 months ago
Amazing lesson. I get the intuition behind it, but what is the explanation for subtracting dcost from w?
@ilovepeaceandplaying8917 · a year ago
omg, This is amazing
@tsilikitrikis · a year ago
You are gold, man.
@RowdyRockerProudDad1985 · a year ago
thank you mr. tsode
@dmytro.sereda · a year ago
Has the author looked into the Odin language? Maybe he mentioned it somewhere. I saw he used Jai before, which is kind of an elder brother of Odin. If not, how can we ask Alexey to take a look at it?
@muhamadrichardmenarizki5941 · 9 months ago
What IDE and compiler do you use?
@BboyKeny · 11 months ago
Human neurons are also activated concurrently, doing different things in different parts of the brain. The brain is not a number-crunching machine only concerned with processing to get to 1 answer for 1 query.
@Avighna · a year ago
50:02 The bias is indeed an extremely necessary thing. Let's say your training data was not y = 2x but y = 2x + 1 (so 0 -> 1, 1 -> 3, 2 -> 5, ...). Then no matter how close to 2 your weight was, it would never be enough to reach a near-zero error unless you added the bias (of 1 in this case). The bias is extremely useful in all ways! Great explanation though, and I'm not holding this against you, since you openly admitted to not knowing much.
@sukina5066 · 5 months ago
43:07 relatable. 1:26:11 loving the journey...
@yairlevi8469 · a year ago
"Ooga Booga, Brute Force" - Tsoding, 2023
@Duck.Sensei · 5 months ago
What terminal and IDE is he using? I like that custom text at the bottom, I want it too 🤣. (1:04:22 I now understand that it is Emacs. But if someone knows how to make the custom text in the bottom bar, I would appreciate the help.)
@actualBIAS · a year ago
Minute 33:00 lol. This stuff starts making sense once you get into the field. Dude, look into neural topology and circuitry; you will love it. By the way, I'm enjoying your content so far. Keep it up, bro.
@michalbotor · a year ago
The picture that you found was excellent: it showed that the distinction between weights and bias is artificial; the bias is just another weight. To be truthful, the bias has a different purpose than the weights: weights control the influence of the inputs, while the bias controls the output with the inputs fixed; it shifts the output. But as far as computation goes, y = w*x + b is the same as y = w*x + b*1 = [w; b] * [x; 1] = w' * x'. This is super important, as GPUs are optimized for matrix multiplication, and y = w' * x' is in the form of a matrix multiplication.
@michalbotor · a year ago
btw. as far as I understand, the reason we do y = f(w*x) instead of just y = w*x is that the latter is linear, i.e. the output is linearly proportional to the input, and not all systems that we want to model are linear; hence the input is funneled through a nonlinear function f to make the model nonlinear.
@michalbotor · a year ago
btw. don't get me wrong, but you would be the best teacher for future engineers and scientists that I can imagine. I am quite old, but when I see you code, I have this sense of learning through curious discovery, rather than learning by heart. You actually make me want to learn, and for that I am very grateful.
@noctavel · a year ago
tsoding: "im gonna create a neural network without all this matrixes bullshit" 12:38 - tsoding, line 3
@debajyatidey9468 · a year ago
We want more episodes on this topic
@OmerFarukGonen · a year ago
I hope you can do this for the rest of your life.