
74 - Image Segmentation using U-Net - Part 2 (Defining U-Net in Python using Keras)

  127,232 views

DigitalSreeni

A day ago

The previous video in this playlist (labeled Part 1) explains U-Net architecture. This video tutorial explains the process of defining U-Net in Python using Keras API.
The code from this video is available at: github.com/bns...

Comments: 167
@aelaeb6391 2 years ago
I don't have words to explain how important your work is for software engineers and biomedical engineers. Thank you.
@DigitalSreeni 2 years ago
Thank you very much for your kind comments.
@anthonymwangi6889 4 years ago
Wow, this is the best explanation in the whole world so far.
@DeathlessLife786 A year ago
Really, thank you sir... I am following your videos; they are helping me do my research work... No one in this world teaches others the way you do, without holding anything back. You deliver everything to everyone very openly... A good-hearted person.
@CristhianSanchez 3 years ago
Great, great, great... I would say I took many courses before, and that is why I can follow you. But your explanations are done in such a way that I can understand the meaning of small details I did not know before or had just forgotten. Thanks for sharing your knowledge!
@DigitalSreeni 3 years ago
Great to hear!
@likumahesh5694 4 years ago
One of the best videos on U-Net. Hands down.
@ankitghosh8256 3 years ago
I loved the sleek implementation of the image normalization
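The normalization being praised here is done with a Keras Lambda layer placed right after the input, so raw uint8 images can be fed straight into the model. A minimal sketch, assuming 128x128 RGB inputs as in the video:

    import tensorflow as tf

    IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS = 128, 128, 3  # assumed input size

    inputs = tf.keras.layers.Input((IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
    # Scale pixel values from [0, 255] to [0, 1] inside the model itself,
    # so no separate preprocessing step is needed before training or prediction.
    s = tf.keras.layers.Lambda(lambda x: x / 255.0)(inputs)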
@techshark7194 4 years ago
Amazing work... please keep uploading material related to biomedical imaging!
@DigitalSreeni 4 years ago
I will try my best
@xiaoli4056 3 years ago
Thanks very much, I learned a lot from your videos, including the other channel! Pure GOLD!
@DigitalSreeni 3 years ago
Great to hear!
@dianasalazar7897 2 years ago
Great video to understand the coding of a U-Net CNN!
@ilkercankat2993 3 years ago
This series is incredible, thank you for your work and time.
@DigitalSreeni 3 years ago
My pleasure!
@adityasreekumar1601 4 years ago
Hello Sir! Words can't explain what amazing work you're doing. Fantastic playlist and so much needed. Kudos! Also, video no. 75 is not there (Part 3 is missing). Can you please check? Thanks a lot.
@DigitalSreeni 4 years ago
Please check my playlist, I definitely see video 75.
@adityasreekumar1601 4 years ago
@DigitalSreeni Some of your videos are set to private, for example video no. 72. Maybe if you check your playlist from another user ID you'll see it. Thanks.
@laliborio 4 years ago
Definitely educational! Thank you.
@abubakrshafique7335 3 years ago
the best video with the best explanation. Thumbs Up
@DigitalSreeni 3 years ago
Glad you think so!
@suvarnamaji3796 3 years ago
very good explanation. Thank you for the effort you have put in.
@DigitalSreeni 3 years ago
Glad it was helpful!
@saumaydudeja7423 A year ago
Goddamn, just came across this gem of a channel! Amazing work!
@imhungry48.o_o. A year ago
This is so very helpful. Thank you so much!
@HerrWortel 4 years ago
You just earned a subscriber. Well done!
@DigitalSreeni 4 years ago
Thanks for subscribing, I really appreciate it.
@briskminded9020 4 years ago
I also think so, he earned it.
@jenushadijafari6041 A month ago
I was wondering whether there is any YouTube channel like yours but in PyTorch? Thanks for being this amazing...
@zeeshanahmed3997 4 years ago
awesome video
@cristhian4513 4 years ago
Thank you for all the guidance :D
@talha_anwar 4 years ago
Why is kernel_size=(2,2) used in Conv2DTranspose instead of (3,3), and why do you use strides?
@DigitalSreeni 4 years ago
Because you are concatenating in addition to up-convolution (Transpose). Please read the original paper, not sure if they explained it in detail. arxiv.org/pdf/1505.04597.pdf
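For context, a 2x2 transpose convolution with stride 2 exactly doubles the spatial size with no overlap between output patches, which is what makes the result line up with the encoder feature map it is concatenated with. A rough sketch of one decoder step; the tensor names and shapes here are illustrative, not the exact code from the video:

    import tensorflow as tf

    c4 = tf.keras.layers.Input((16, 16, 128))  # decoder input (bottom of the U)
    c3 = tf.keras.layers.Input((32, 32, 64))   # matching encoder feature map

    # 2x2 kernel, stride 2: 16x16 -> 32x32
    u6 = tf.keras.layers.Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(c4)
    u6 = tf.keras.layers.concatenate([u6, c3])  # shapes now match: 32x32x(64+64)
    print(u6.shape)  # (None, 32, 32, 128)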
@heshan3694 A year ago
I understand that U-Net is an extension of, or based on, the fully convolutional network (FCN) after reading several papers, and one advantage of the FCN is that it can take inputs of any size. So I am wondering if that also works for U-Net; for example, could we just define the depth dimension as 3 (RGB images) and leave the width and height unspecified? Really great series, it helped me a lot. Thanks.
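Since U-Net is fully convolutional, Keras does allow leaving the spatial dimensions unspecified; a hedged sketch, with the caveat that each input's height and width must still be divisible by 16 because of the four 2x2 pooling steps:

    import tensorflow as tf

    # None for height and width: the model accepts variable-sized RGB inputs.
    inputs = tf.keras.layers.Input(shape=(None, None, 3))
    x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same')(inputs)
    x = tf.keras.layers.MaxPooling2D((2, 2))(x)
    # ... the rest of the U-Net would follow the same pattern ...
    outputs = tf.keras.layers.Conv2D(1, (1, 1), activation='sigmoid')(x)
    model = tf.keras.Model(inputs, outputs)
    model.summary()  # spatial dimensions show up as None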
@abelworku8475 3 years ago
Thank you very much for your nice educational tutorial!
@DigitalSreeni 3 years ago
You are welcome!
@08ae6013 4 years ago
Hi Sreeni... Your videos are so good and they are crystal clear. Can you please explain the need for padding='same' for the first 'u6' in the expansion path?
@DigitalSreeni 4 years ago
With 'same' padding the layer's outputs will have the same dimensions as inputs. It automatically takes care of adding required padding to ensure that the input and output dimensions remain the same.
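A quick way to see the difference; a small demo, not code from the video:

    import tensorflow as tf

    x = tf.random.normal((1, 128, 128, 16))
    same = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(x)
    valid = tf.keras.layers.Conv2D(16, (3, 3), padding='valid')(x)
    print(same.shape)   # (1, 128, 128, 16): spatial size preserved
    print(valid.shape)  # (1, 126, 126, 16): shrinks by kernel_size - 1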
@DeathlessLife786 A year ago
@DigitalSreeni Again, thanks a lot sir.
@fabiancabrera4726 7 months ago
This video is gold. I'm new to all of this, but why is dropout equal to 0.1? Also, it varies to 0.2 in other lines. Is there a video that explains it? Thank you so much.
@renarouou A year ago
Hello, as I understood it, the concatenation function is for RGB; in the case of grayscale, am I supposed to concatenate?
@ramchandracheke 4 years ago
Thank you very much!
@astratenebris1461 A year ago
Great video. Could someone explain why he picked a 2x2 kernel in the transpose convolution in the upward/decoder path instead of the 3x3 he uses for the regular convolutions?
@janszczekulski3916 4 years ago
Hey, you mentioned that Conv2DTranspose is the exact opposite of Conv2D; isn't it more like the opposite of max-pooling?
@DigitalSreeni 4 years ago
The opposite of max pooling is something like upsampling; in both cases you are just resizing images. Upsampling uses simple interpolation (e.g., nearest neighbor or bilinear). Very simple math and faster to execute. Conv2DTranspose is a convolution operation whose kernel is learned during the training process. During this operation the image is also upsampled, but based on what was learned during training. Here is some good explanation: towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d
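To make the distinction concrete, here is a small side-by-side sketch (illustrative only): UpSampling2D has no trainable weights, while Conv2DTranspose learns its kernel during training.

    import tensorflow as tf

    x = tf.keras.layers.Input((64, 64, 32))

    # Fixed interpolation: just resizes the feature map, nothing to learn.
    up = tf.keras.layers.UpSampling2D(size=(2, 2), interpolation='nearest')(x)

    # Learned upsampling: the 2x2 kernel weights are trained like any other conv.
    tr = tf.keras.layers.Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(x)

    print(tf.keras.Model(x, up).count_params())  # 0
    print(tf.keras.Model(x, tr).count_params())  # 2*2*32*32 + 32 = 4128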
@mohanjyotibaruah7374 4 years ago
How do I do segmentation on real-time images taken by a webcam?
@mathgmathg923 3 months ago
Excellent! Please make a video about VAE! Variational Auto Encoder! 🥹
@ShakirKhan-th7se A year ago
u9 = tf.keras.layers.concatenate([u9, c1], axis=3): what is the function of the parameter "axis=3" in the line above?
@francudina8309 4 years ago
Hi Sreeni, I'd like to know how to configure the U-Net model for different image resolutions (e.g. 250x250)? Thanks!
@DigitalSreeni 4 years ago
You can try existing architecture and just change the input image size. Print out the model summary to make sure everything seems logical. If not, try modifying parameters. You just need to try and see what works. Normally, with enough experience people can do this exercise in their mind (not me yet!).
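One hedged way to handle an awkward size such as 250x250 is to resize (or pad) the images to the nearest dimensions divisible by 16, e.g. 256x256, so the four pooling/upsampling steps line up; this sketch assumes scikit-image is available:

    import numpy as np
    from skimage.transform import resize  # assumption: scikit-image is installed

    IMG_HEIGHT, IMG_WIDTH = 256, 256  # divisible by 2 four times, so skip shapes match

    def prepare(img):
        # Resize an arbitrary-sized image (e.g. 250x250) to the network input size.
        return resize(img, (IMG_HEIGHT, IMG_WIDTH), preserve_range=True, anti_aliasing=True)

    example = np.random.randint(0, 255, (250, 250, 3), dtype=np.uint8)
    print(prepare(example).shape)  # (256, 256, 3)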
@RowzatFaiz 4 years ago
Hello Sir! Firstly, I must say your videos are a total gem... amazing explanation :) Thank you for all the guidance. Can you please explain the axis=3 in line 61 of this video?
@DigitalSreeni 4 years ago
axis defines the axis along which you'd like to concatenate. Please change values to 1 or -1 to see the output dimensions. Now have a look at it with a value of 3, this shows the right dimensions of 128x128x32 - the shape of our concatenated dataset.
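A tiny shape check that mirrors that explanation, using illustrative tensors rather than the video's exact variables:

    import tensorflow as tf

    u9 = tf.random.normal((1, 128, 128, 16))  # upsampled decoder features
    c1 = tf.random.normal((1, 128, 128, 16))  # encoder skip connection

    merged = tf.keras.layers.concatenate([u9, c1], axis=3)
    print(merged.shape)  # (1, 128, 128, 32): channels stack along axis 3 (same as axis=-1 here)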
@manuelpopp1687 2 years ago
May I ask what dtype and dimensions the input X and y images should have? I'm using SPC Crossentropy and my model is not learning anything. I would like to check if the training generators produce the correct type of image, but I am not sure what the UNet expects.
@DigitalSreeni 2 years ago
You normalize/scale your input images so the dtype will be converted to float anyway. If the model is not learning, try a different loss function or learning rate.
@pacomermela6497 3 years ago
Why don't you use the U-Net model from segmentation_models? What is different between the two approaches?
@DigitalSreeni 3 years ago
You can use Unet from segmentation models but this video is for those who want to learn Unet and understand how to implement it. Also, writing your own code for Unet gives you more freedom in defining the architecture. Unet is just the name for architecture, you can modify encoder and decoder networks to your need and liking.
@pacomermela6497 3 years ago
@DigitalSreeni Thank you! I had some problems trying to apply a U-Net model from segmentation_models due to incompatibilities between libraries. Your example could be a way to solve it. Thank you.
@rakeshmothukuru6561 2 years ago
Hi Sreeni, thank you for the explanation, but I have a query. You used binary cross-entropy as the loss because it is an image classification problem. But this is an image segmentation problem, right? So does that loss still hold good?
@DigitalSreeni 2 years ago
Isn't image segmentation the same as classification, except at the pixel level? Instead of classifying the entire image, you are classifying every pixel. It is still a classification problem.
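In other words, the final layer emits one probability per pixel, so the usual binary-classification loss is applied pixel-wise. A minimal compile sketch, assuming a model whose last layer is the single-filter sigmoid convolution discussed in this thread:

    import tensorflow as tf

    def compile_for_binary_segmentation(model):
        # `model` is assumed to end in Conv2D(1, (1, 1), activation='sigmoid'),
        # so each output pixel is an independent foreground/background probability.
        model.compile(optimizer='adam',
                      loss='binary_crossentropy',  # averaged over every pixel
                      metrics=['accuracy'])
        return model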
@ExV6120 4 years ago
Awesome explanation. But one question: why is the dropout set to 0.2 from c3 onwards?
@4MyStudents 2 years ago
It's to avoid overfitting.
@mdyounusahamed6668 A year ago
u9 = tf.keras.layers.concatenate([u9, c1], axis=3): why did you use axis=3 here?
@zombietechz8361 4 years ago
Amazing video. I followed the code rigorously, but somehow I keep getting errors when I try to make the skip connections. I get errors for tf.keras.layers.concatenate([u8,c2]) such as this: ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 32, 32, 32), (None, 64, 64, 32)]. Please let me know what is wrong. Thank you.
@DigitalSreeni 4 years ago
For concatenation, the dimensions of both arrays need to be the same, except for the last axis along which you are concatenating. In your case the first input has a dimension of 32x32x32 and the second one has 64x64x32. You need to make sure either the first one is 64x64x32 or the second one is 32x32x32. If you're following my code, then please make sure your code matches mine. Omitting one little thing can mess up code badly.
@SawsanAAlowa 2 years ago
Thank you for posting this tutorial. If I am using X-ray images, the number of channels would be 1, right? What else would need to change for a grayscale image? Please advise.
@nahidanazir3746 2 years ago
Amazing videos, sir. I am facing an issue: after the model is built, the training images are not resized. It shows me 0/65 and I get an undesired result. Could you please suggest what the issue is?
@fahadp7454 A year ago
From 3 channels (R, G, B), how can we make 16 feature maps, since 16 is not divisible by 3?
@MuktoAcademy 7 months ago
How can I get the exact code for this tutorial from the GitHub folder? Can anyone tell me?
@nunorodrigues3195 4 years ago
What's the logic for using transpose convolutions instead of upsampling layers?
@DigitalSreeni 4 years ago
Transpose is the opposite of convolution in autoencoders whereas upsampling is the opposite of pooling. So depending on how you want to engineer the layers you pick the right method.
@hejarshahabi114 3 years ago
What if your mask image has more than 3 classes (cow, horse, sheep)? What would the shape of the final output be? I appreciate your time and effort to teach us deep learning models.
@BigBrother4Life 2 years ago
x = 'Thank you'
while DigitalSreeni > university_professors:
    print(x)
@mohamedbachiri7891 3 years ago
Hello, how can I make two NVIDIA GeForce 1080 RTX GPUs work together at the same time for training?
@DigitalSreeni 3 years ago
Setting up a GPU for TensorFlow can be a bit of a challenge. There are a few videos on YouTube and I hope they can help you. I tried to record a video on this topic, but there are too many things to check and there are various configurations out there, so a standard video is impossible. You need to check many sources. Maybe you will have better luck than I did.
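As a hedged pointer that is not covered in the video: TensorFlow's MirroredStrategy is the usual way to spread training across two local GPUs; the model just has to be built and compiled inside the strategy scope. The tiny Sequential model below is only a stand-in for the actual U-Net.

    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()  # uses all visible GPUs by default
    print('Number of replicas:', strategy.num_replicas_in_sync)

    with strategy.scope():
        # Stand-in model: any Keras model built inside the scope is mirrored.
        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(1, (1, 1), activation='sigmoid', input_shape=(128, 128, 3))
        ])
        model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

    # model.fit(...) then splits each batch across the available GPUs automatically.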
@laviniatamang9329 A year ago
I have a doubt: why are different activation functions used, 'ReLU' in the convolution layers and a different one, 'sigmoid', in the output?
@laviniatamang9329 A year ago
And definitely, your videos on U-NET are my savior!
@DigitalSreeni A year ago
They serve different purposes. This may help: kzfaq.info/get/bejne/l7CEorigyLawl2g.html
@tilkesh A year ago
Thanks
@iqbalhabibiehabibie5689 4 years ago
Learned something from these videos. I am trying to use your test_image.jpg from your first video and construct the convolution layers, but I get an error: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all(). I changed this step: inputs = tf.keras.layers.Input((image)); s = tf.keras.layers.Lambda(lambda x: x / 255)(inputs) from: inputs = tf.keras.layers.Input((IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS)). Is it possible to do this? I need your input on this. Also, I want to ask how you make Capture.jpg as you follow this for image segmentation in this video. Thank you.
@ahasanhabibsajeeb1979 3 years ago
Why are the dropout rates different in different layers?
@BareqRaad 3 years ago
Thank you for this great demonstration. Yet I have a question: why use ReLU as the activation function in the CNN layers and change it to sigmoid at the last layer?
@vincentlee5143 3 years ago
The relu activation function is actually to discard all the regions that are unlike a particular filter so that the output feature maps will only contain the feature that is found, not the features that are not similar to the filter. The last layer uses a sigmoid activation function in order to determine the probability of each pixel of the output image belonging to the positive class
@syedshahwaizbukhari4720 2 years ago
It's standard: when doing binary classification (2 classes) with a deep learning model we use 'sigmoid', and when doing multiclass classification we use 'softmax' at the last layer. Also, the number of neurons in the last layer should equal the number of classes. So in the case of two classes the last layer has 2 neurons with sigmoid activation, and in the case of 4 classes it has 4 neurons with softmax activation at the last layer. @Bareq Raad
@BareqRaad 2 years ago
@syedshahwaizbukhari4720 This is a generative model that will create an image, not classes; that's why I asked this question.
@SindyanCartoonJordanian 3 years ago
Thank you so much
@manuelpopp1687 2 years ago
When I use this exact U-Net (also with model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']), no changes at all) and I fit it to the augmented dataset* from video #177, accuracy increases to well over 0.9, but the resulting model predicts only "0". When I add mean IoU to the metrics, mIoU goes up and down but stays below 0.2. Loss decreases from 0.028 by approx. 0.0005 per epoch. What could possibly cause such behaviour? *Where I got the dataset, the values were in {0, 255}, so I changed them to uint8 in {0, 1}.
@manuelpopp1687 2 years ago
I found the issue arises from the "sample weights" parameter of my data loader... Never mind.
@DigitalSreeni 2 years ago
There is immense satisfaction in successfully troubleshooting an issue :)
@zakariasaidi2191 3 years ago
Thank you for your videos, very informative. What version of TensorFlow are you using in this tutorial? Thank you.
@kebiriisamdine4980 3 years ago
Did he respond to you?
@abbasagha9661 11 months ago
Thanks!
@DigitalSreeni 11 months ago
Thank you very much.
@somaiatawfeek764 4 years ago
Can you give me the link to the video on setting up the GPU?
@soniaamiri7815 A year ago
Hello, thank you for your work and time. How can I apply 219-unet_model_with_functions_of_blocks.py to the MNIST dataset?
@thepaikaritraveller 4 years ago
InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version. What is this problem?
@DigitalSreeni 4 years ago
You seem to be working on a system with a GPU for TensorFlow, but the CUDA version is not compatible. This can be a pain to resolve, as you need to be careful about which CUDA and cuDNN versions are compatible with which TensorFlow version and how they match the specific version of your GPU and its drivers. In other words, you don't have the GPU set up for TensorFlow. Please do a Google search for the proper installation. If you just want to use the CPU, then uninstall tensorflow-gpu and only install tensorflow.
@thepaikaritraveller 4 years ago
@DigitalSreeni Thank you so much... all of your video series are awesome.
@thepaikaritraveller 4 years ago
@DigitalSreeni Can you please give me your email? I need some help. My thresholded image is completely black.
@JS-tk4ku 3 years ago
Thanks for your instruction. Please tell me how to apply the trained model to a 4K image (or larger) instead of using the same size as the training data.
@DigitalSreeni 3 years ago
You should be able to handle images of different sizes for fully convolutional neural networks. Did you try it on large image and did it fail? Of course, one way to handle is to divide the image into smaller patches.
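A hedged sketch of the tiling idea for a large image, in plain NumPy, assuming a trained `model` with fixed 256x256 inputs and a single-channel output; border handling is ignored for brevity:

    import numpy as np

    def predict_in_tiles(model, image, tile=256):
        # Run a fixed-size segmentation model over a large image, tile by tile.
        h, w = image.shape[:2]
        out = np.zeros((h, w), dtype=np.float32)
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                patch = image[y:y + tile, x:x + tile]
                pred = model.predict(patch[np.newaxis, ...] / 255.0, verbose=0)
                out[y:y + tile, x:x + tile] = pred[0, ..., 0]
        return out  # pixels outside complete tiles are left at 0 in this simplified version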
@parassalunkhe6583 A year ago
What is the name of the code file on GitHub for this video? The GitHub link you provided has many files; which one is for this video tutorial?
@DigitalSreeni A year ago
This video is numbered 74, so please look for code number 74 on GitHub. Here is the direct link: github.com/bnsreenu/python_for_microscopists/blob/master/074-Defining%20U-net%20in%20Python%20using%20Keras.py
@briskminded9020 4 years ago
It's so helpful, thanks.
@DigitalSreeni 4 years ago
You're welcome!
@ms.t.swapna5555 3 years ago
Getting an error:

TypeError: ('Keyword argument not understood:', 'kernal_initializer')

raised from this line:

c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernal_initializer='he_normal', padding='same')(c1)

(The traceback runs from C:\Users\swapna\OneDrive\Desktop\untitled2.py, line 20, through tensorflow\python\keras\utils\generic_utils.py, validate_kwargs.)
@DigitalSreeni 3 years ago
Please watch this video to handle the error: kzfaq.info/get/bejne/qd96jdt12bLZmqc.html
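For readers hitting the same message: the traceback above shows the keyword misspelled as kernal_initializer, and Keras only understands kernel_initializer, so the offending line would become something like:

    c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu',
                                kernel_initializer='he_normal',  # was misspelled 'kernal_initializer'
                                padding='same')(c1)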
@narayanamurty7586 4 years ago
It is a great video. Thanks for sharing your knowledge. Sir, how do I create mask images from the original images? Please make a video about it. Thanks.
@DigitalSreeni 4 years ago
This is a common question that people ask me, and it is really challenging as there is no easy-to-use tool. Please sign up for a free APEER.com account; we plan on releasing a tool soon (mid-July 2020). By the way, APEER is an online image analysis platform that is free for academia, non-profits and individuals.
@narayanamurty7586 4 years ago
@DigitalSreeni Thanks for the information; I will sign up for that.
@tonihullzer1611 2 years ago
Thanks!
@DigitalSreeni 2 years ago
Thank you very much for your kind contribution Toni. Please keep watching.
@purvanyatyagi2494 4 years ago
Can we use the U-Net architecture for image-to-image translation?
@DigitalSreeni 4 years ago
Yes, of course. But I think GANs are better for domain transformation type applications. I need to find time to record videos on GANs. Hopefully sometime soon.
@purvanyatyagi2494 4 years ago
Waiting for GANs. Thanks for the response.
@kevalsharma1865 3 years ago
Does the image resolution matter here? I am getting an error about mismatched shapes when trying to use my resolution instead of yours.
@mdsuhail9198 3 years ago
Did anyone get an error while importing the segmentation library?
@aniketvashishtha4142 4 years ago
Why not use the data augmentation rescaling to get floating-point pixel values?
@DigitalSreeni 4 years ago
You can perform data augmentation and other tricks to enhance the training data if you want. I am not sure why you would use it as a process to create floating values.
@aniketvashishtha4142 4 years ago
@DigitalSreeni There is a rescaling option in Keras under data augmentation. Just asking if that is a way of doing this rescaling?
@rajshreehande3458 A year ago
Hello sir, which IDE have you used to run this code?
@rajshreehande3458 A year ago
Can I use PyCharm or VS Code for this?
@DigitalSreeni A year ago
You can use any IDE you want, the code doesn't care. So please pick the one you're comfortable with.
@abhishekreddy7219 4 years ago
Hi, fantastic video explaining semantic segmentation. By the way, are you a Telugu guy?
@DigitalSreeni 4 years ago
Yes, spent the first 21 years of my life in Hyderabad, love the city.
@abhishekreddy7219 4 years ago
@DigitalSreeni And where are you now?
@DigitalSreeni 4 years ago
San Francisco Bay Area.
@abhishekreddy7219 4 years ago
@DigitalSreeni Which company are you working at? And this is the last question.
@DigitalSreeni 4 years ago
Please check my LinkedIn profile, the link is on the channel main page.
@Irfankhan-jt9ug 3 years ago
Which tool can be used to create image masks?
@datmanpires 3 years ago
sensarea
@RizalAbulFata 2 years ago
Excuse me, I want to ask you something: I'm new to ML. I want to implement this U-Net for lung segmentation; do I need a lung mask to do that? Can you suggest the steps for doing it? Thanks.
@DigitalSreeni 2 years ago
Please go through my videos about U-Net and segmentation to get a good understanding of what it is all about. I have covered many topics using U-Net and you will find answers to many of your questions. And yes, you need to provide ground truth for supervised deep learning, and for semantic segmentation this is done via masks representing the various labels. You can label your images in many ways; the one I use is from www.apeer.com, because this is what we do at work.
@RizalAbulFata 2 years ago
@DigitalSreeni OK, thanks for your answer. I'm really happy watching your videos.
@tamilbala6239 4 years ago
Sir, I have modified the input size and made the corresponding changes in c1, c2, etc., but I get the error 'Lambda object has no attribute shape'. Why does it occur?
@fratcetinkaya8538 2 years ago
When I wrote the same code I kept getting "TypeError: Inputs to a layer should be tensors. Got: ..." for the c6 line of the code (which is c6 = keras.layers.Conv2D(128, (3, 3), activation = "relu", kernel_initializer = "he_normal", padding = "same")(u6)). I couldn't find the reason on the popular sites. If you know why, please help me :) Thanks for all those videos. It seems I'm going to learn all the details of the field through you.
@DigitalSreeni 2 years ago
You seem to be mixing various Keras imports. I see at least two variations: one where you import from tensorflow.python.keras and the other where you just have keras. Please make sure you follow one approach. I recommend using tensorflow.keras. This may fix your issue.
@fratcetinkaya8538 2 years ago
@DigitalSreeni Thanks for your attention. I solved the problem by using concatenate instead of Concatenate :D They are different methods that belong to Keras. When writing code without looking at the original, these sorts of errors can unfortunately occur.
@MuhammadAwais-vi8de 4 years ago
Video 72 and U-NET 3rd part (video 75) are missing
@DigitalSreeni 4 years ago
Thanks for letting me know. I removed video 72 as it contained information that may not help all viewers. I seem to have missed adding video 75 to the playlist, it is back now.
@saikrishnaYadhav 4 years ago
Hi, after segmentation how do we do feature extraction and classify those features?
@DigitalSreeni 4 years ago
Not sure what you mean. Why would you do feature extraction after segmentation; in fact feature extraction is done to facilitate segmentation. Do you mean object measurement after segmentation?
@saikrishnaYadhav 4 years ago
@DigitalSreeni If we don't do feature extraction, then how can we classify, in the heart vessel detection dataset, whether it is normal or abnormal?
@marcusbranch2100 4 years ago
Hey, can you help me here, please? ValueError: Input arrays should have the same number of samples as target arrays. Found 670 input samples and 128 target samples. My code is exactly the same as yours and I can't understand why this is happening.
@matthewavaylon196 4 years ago
I've been seeing this in other versions of UNET in tensorflow, but why is the number of kernels used in the output layer 1? The paper said we can use the last layer to represent the number of classes, so in a binary segmentation case why is this 1 and not 2?
@DigitalSreeni 4 years ago
For binary classification we are trying to classify our input into either 0 or 1. This means we just have one output, either 0 or 1. For example, if our output is 1 we know it is dog and if it is 0 we know it is cat. There is no other possibility so one output is enough for us to classify. We use sigmoid for binary classification, you can also use softmax which would be same as sigmoid for binary problems. You can also treat it like a multiclass problem where you have two classes and use softmax that outputs probability for each class. If you convert the probability to classification you’ll get same result as you would with binary classification. I hope this clarifies your doubt.
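The two output-layer choices described in that reply would look roughly like this; `x` stands in for the last decoder feature map and the shapes are illustrative:

    import tensorflow as tf

    x = tf.keras.layers.Input((128, 128, 16))  # stand-in for the last decoder feature map

    # Option A (the 1-filter sigmoid discussed here): one probability per pixel,
    # paired with binary_crossentropy.
    binary_out = tf.keras.layers.Conv2D(1, (1, 1), activation='sigmoid')(x)

    # Option B: treat it as a 2-class problem, two filters plus softmax over the channel axis,
    # paired with (sparse_)categorical_crossentropy. For two classes the results are equivalent.
    two_class_out = tf.keras.layers.Conv2D(2, (1, 1), activation='softmax')(x)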
@matthewavaylon196 4 years ago
Python for Microscopists by Sreeni: That does help, but 0 or 1 is still 2 classes. Could you provide an example of when we would have 2 instead of 1 in the layer? I started out with PyTorch and this video did a really good job explaining why it's 2. m.kzfaq.info/get/bejne/q5ecotx1qNWrknk.html
@matthewavaylon196 4 years ago
Are you saying that if we set the output to 2 filters, so 2 classes, and use softmax, then it is the same as what you did here with 1 filter and sigmoid?
@DigitalSreeni 4 years ago
0 or 1 output is not two classes, it may be counterintuitive. Think of the output layer as giving only one output. That output would be either 0 or 1 which makes it binary.
@DigitalSreeni 4 years ago
Yes, exactly.
@santhapurharsha123 2 years ago
How do we apply batch normalization, if we want to? At what stage should we apply it, and how do we use it?
@vijayrao1777 4 years ago
Is this program applicable to the CPU-only version too?
@DigitalSreeni 4 years ago
Yes, of course.
@cristhian4513 4 years ago
How cool :3
@adithiajovandy8572 4 years ago
How do I define the U-Net for multiclass? I don't know how to do it; somebody please help me :)
@user-hm5tz9vb9v A year ago
Which library is better for implementing the network: Keras or PyTorch?
@DigitalSreeni A year ago
Doesn't matter, it depends on your comfort. Keras is usually easy for most people, especially beginners.
@afsanaahsanjeny2065 4 years ago
You are amazing
@DigitalSreeni 4 years ago
I know :)
@HenrikSahlinPettersen 2 years ago
For a tutorial on how to do deep learning based segmentation without the need to write any code using only open-source free software, we have recently published an arXiv preprint of this pipeline with a tutorial video here: kzfaq.info/get/bejne/b8qEmbio07Kaqo0.html (especially suited for histopathological whole slide images).
@sorasora3611 2 years ago
Hello, how can I contact you? I am a master's student from Iraq... I work on semantic segmentation using U-Net and the Cityscapes dataset, but the program does not run in Spyder... How can you help me?
@DigitalSreeni 2 years ago
Sorry, I cannot help with personal projects. I wish I had that kind of time but unfortunately I do not. You can try posting your specific questions on my Discord server and see if someone can answer. discord.gg/QFe9dsEn8p