Unlocking your CPU cores in Python (multiprocessing)

288,380 views

mCoding
1 year ago

How to use all your CPU cores in Python?
Due to the Global Interpreter Lock (GIL) in Python, threads can't make real use of multiple CPU cores. Instead, use multiprocessing! Process pools are beginner-friendly but also quite performant in many situations. Don't fall into the many traps of multiprocessing, though; this video will guide you through them.
― mCoding with James Murphy (mcoding.io)
Source code: github.com/mCodingLLC/VideosS...
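For context, a minimal sketch of the process-pool pattern the description refers to (the function and inputs here are made up for illustration, not taken from the video's source):

    import multiprocessing

    def cpu_heavy(n):  # stand-in for any CPU-bound function
        return sum(i * i for i in range(n))

    if __name__ == "__main__":  # guard is required when processes are spawned (Windows/macOS)
        inputs = [2_000_000] * 20
        with multiprocessing.Pool() as pool:  # defaults to os.cpu_count() worker processes
            results = pool.map(cpu_heavy, inputs)
        print(len(results), "results")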
SUPPORT ME ⭐
---------------------------------------------------
Patreon: / mcoding
Paypal: www.paypal.com/donate/?hosted...
Other donations: mcoding.io/donate
Top patrons and donors: Jameson, Laura M, Vahnekie, Dragos C, Matt R, Casey G, Johan A, John Martin, Jason F, Mutual Information, Neel R
BE ACTIVE IN MY COMMUNITY 😄
---------------------------------------------------
Discord: / discord
Github: github.com/mCodingLLC/
Reddit: / mcoding
Facebook: / james.mcoding
MUSIC
---------------------------------------------------
It's a sine wave. You can't copyright a pure sine wave right?

Comments: 226
@tdug1991 (1 year ago)
It's also worth noting that smaller chunk sizes may be better for unpredictably distributed job times, as one worker may randomly grab a chunk of many expensive jobs and keep the whole pool waiting after the rest of the processes have finished. Great video, as always!
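A hedged illustration of that trade-off with a made-up workload: chunksize=1 hands out one job at a time, which balances uneven jobs better at the cost of more inter-process communication.

    import multiprocessing

    def work(n):  # hypothetical task with unpredictable cost
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [10_000, 8_000_000, 20_000, 5_000_000, 1_000]  # wildly uneven sizes
        with multiprocessing.Pool() as pool:
            # With chunksize=1, a worker that drew a cheap job immediately picks up
            # another instead of being stuck behind a large pre-assigned chunk.
            for result in pool.imap_unordered(work, jobs, chunksize=1):
                print(result)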
@unusedTV (1 year ago)
Your video is about two years late for me! I was working on a heat transfer simulation in Python where we had to compare hundreds of different input configurations. I knew about the GIL and multiprocessing in general outside of Python, but had to figure out myself how to get it to work. Eventually I settled on a multiprocessing pool and it worked wonders, because now we could run 32 simulations in parallel (Threadripper 1950X). Quick caveat that I didn't hear you mention: a lot of processors have hyperthreading/SMT (Intel/AMD respectively), showing double the number of cores in the task manager. In our case we found that spawning a process for each physical core provided better results than using all logical cores.
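If you want to try the one-worker-per-physical-core approach described above, a sketch using the third-party psutil package (assumed to be installed; os.cpu_count() only reports logical cores):

    import multiprocessing
    import psutil  # third-party: pip install psutil

    def work(n):
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        physical = psutil.cpu_count(logical=False)  # ignores SMT/hyperthread siblings
        with multiprocessing.Pool(processes=physical) as pool:
            results = pool.map(work, [2_000_000] * 32)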
@jesuscobos2201 (1 year ago)
Love your videos. I usually watch all of them just for fun but this has enabled me to speed up a very heavy optimization for my science stuff. Ty for your dedication. I can ensure that it has real world implications :)
@jakemuff9407 (1 year ago)
Great video! Maybe some more "real world" examples would be useful. Knowing that my code *could* be parallelized and actually parallelizing the code are two very different things. I've found that knowledge of multithreading in python does not translate to automatic code speed up. And of course no two problems are the same.
@MrTyty527 (1 year ago)
I think it is more about doing experiments on asyncio/threading/multiprocessing on your own - everyone has different Python use cases
@ibrahimaba8966 (1 year ago)
Multithreading is for I/O-bound tasks; I use multiprocessing with ZeroMQ to do some extensive image processing tasks!
@chndrl5649 (1 year ago)
Take crawling as an example: it would be a huge time saver if you want to crawl multiple words at a time
@chndrl5649 (1 year ago)
It all depends on how you can split your work.
@v0xl (1 year ago)
Python is not the right tool for high-performance applications anyway
@ArtML (1 year ago)
\o/ Yay! Long-awaited multiprocessing video! Always appreciate the humor in the intros! :D Thanks a lot, I am on a path to making parallelization/multiprocessing second nature in my coding - these videos help greatly! More topic suggestions:
- Simple speed-ups using GPUs
- Pandas speedup with Dask - unlocking multiple cores
- Numba, JAX and an overview of JIT compilers
- Cython, and the most convenient (easy-to-use) wrappers for C++ implementations
- All about pickling: best practices/fastest ways to write picklers for novel objects
@ajflink (1 year ago)
And GPU speedups without Nvidia.
@michaelwang3306 (1 year ago)
The clearest explanation on this topic that I have ever seen! Really nice!! Thanks for sharing!
@SpeedingFlare (1 year ago)
That pool thing is so cool. I like that it spawns as many processes as there are cores available. I wish my work had more CPU bound problems
@OutlawJackC (1 year ago)
Your explanation of the GIL makes so much more sense than other people :)
@michaellin4553 (1 year ago)
The funny thing is, adding random noise is actually a useful thing to do. It's called dithering, and is used nearly everywhere in signal processing.
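For the curious, a rough sketch of what dithering can look like in numpy (the values and the triangular noise shape are illustrative, not a production audio pipeline):

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 48_000, endpoint=False)
    signal = 0.5 * np.sin(2 * np.pi * 440 * t)  # float signal in [-1, 1]

    # Triangular (TPDF) dither of roughly one quantization step, added before
    # rounding so the quantization error becomes benign noise instead of distortion.
    step = 1 / 127
    dither = (rng.uniform(-0.5, 0.5, signal.shape) + rng.uniform(-0.5, 0.5, signal.shape)) * step
    quantized = np.round((signal + dither) * 127).astype(np.int8)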
@tommucke (1 year ago)
You would, however, apply it to the analog signal at about half the sampling rate in order to get better results for the digital signal (and smooth it with a capacitor afterwards). It makes no real sense to add it on the digital side, which is the only thing Python can do
@gamma26 (1 year ago)
@@tommucke Unless you're doing image processing and want to achieve that effect I suppose. Pretty niche tho
@maxim_ml (1 year ago)
It can be used as data augmentation in training a speech recognition model
@louisnemzer6801 (9 months ago)
'I'm going to need those sound files with random noise added in my email inbox by five pm' 😅
@knut-olaihelgesen3608 (1 year ago)
You are actually the best at advanced python videos! Love them so much
@lawrencedoliveiro9104 (1 year ago)
5:07 Threading is also useful for turning blocking operations into nonblocking ones. For example, asyncio provides nonblocking calls for reading and writing sockets, but not for the initial socket connection. Simple solution: push that part onto a separate thread.
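One hedged way to do what this comment describes without managing threads by hand is asyncio.to_thread (Python 3.9+); the host and port here are placeholders:

    import asyncio
    import socket

    async def main():
        # socket.create_connection blocks, so run it in a worker thread and
        # keep the event loop free to service other tasks meanwhile.
        sock = await asyncio.to_thread(socket.create_connection, ("example.com", 80))
        sock.close()

    asyncio.run(main())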
@robertbrummayer4908 (1 year ago)
Great video! Man, your videos are awesome. And every time I learn a little bit and get a little bit better, like you say :) Best wishes from Austria!
@codewithjc4617 (1 year ago)
This is great content, I’m a big fan of C++ and Python and this is just amazing
@zemonsim (1 year ago)
This video was so helpful! I recently converted my mass encryption script to use multiprocessing. To encrypt my dataset of 450 MB of images, it went from an estimated 11 hours to just 10 minutes, doing the work at around 750 KB per second.
@piotradamczyk6740 (1 year ago)
I was looking for this kind of lesson for years. Please do more.
@walterppk1989 (1 year ago)
Brilliant video. Absolutely flipping gold
@joshuaowen1941 (1 year ago)
I love your videos man! Absolutely love them!
@unotoli (1 year ago)
So well explained. One nice2have thing - quick tip on how to debug (see summary of time2process) most cpu-intensive tasks (functions, like wav transformation in this case).
@nocturnomedieval (1 year ago)
This is so good and clear. A must share. BTW, how these techniques relate to the case when you are using numba with option parallel=True?
@daniilorekhov9191 (1 year ago)
Would love to see a video on managing shared memory in multiprocessing scenarios
@StopBuggingMeGoogleIHateYou (1 year ago)
Great video. I was not aware of that module! As it happens, I've spent the last six weeks writing something that can run thousands of processes and aggregate the results. I'm not going to throw it away after watching this video, but I will ponder how I might've designed it differently had I known.
@HomoSapiensMember (1 year ago)
really appreciate this, struggled understanding differences between map and imap...
@imbesrs (1 year ago)
You are the only person i keep video notis on for
@neelroshania7116 (1 year ago)
This was awesome, thank you!
@eldarmammadov7872 (1 year ago)
Liked the way you spoke about all three modules (asyncio, threading, multiprocessing) in one video
@flybuy_7983 (1 year ago)
THANK YOU MY BROTHER FROM ANOTHER COUNTRY AND ANOTHER FAMILY!!!
@mme725 (1 year ago)
Nice, might play with this when I get off work later!
@volundr (1 year ago)
This is very useful, thank you
@Darios2013 (1 year ago)
Thank you for great explanation
@Lolwutdesu9000 (1 year ago)
While I'm not using multi-threading in my current work, I'll definitely save this video so I can one day return to it!
@etopowertwon (1 year ago)
Multiprocessing helped me a lot recently. I had a script that periodically loads tons of small XML files from a network share, processes them and saves them locally. The single-threaded version ran in 30 seconds, the multiprocessed one in about 6 seconds.
@codedinfortran (5 months ago)
thank you. This made it all very clear.
@pamdemonia (1 year ago)
It's really interesting to see the threading results (avg ~2.5 sec per file, but only 7.6 sec total). Cool.
@SkyFly19853 (1 year ago)
Very useful for video game development.
@POINTS2 (1 year ago)
Yes! Pool is the way to go. Definitely an improvement over threading, and it allows you to not have to worry about the GIL.
@jonathandawson3091 (1 year ago)
Not always an improvement. A process costs a lot more overhead as he explained in the video. Other languages don't have the stupid GIL, hope that it's also removed from python someday.
@sebastiangudino9377 (1 year ago)
@@jonathandawson3091 It's a safety measure, it'll probably never be removed from Python. If you really need "unsafe" threads you could probably just write your threaded function in C and interop it with Python. Whether that's actually worth it is up to you, but a lot of times it is not
@lawrencedoliveiro9104 (1 year ago)
The GIL is an integral part of reference-counting memory management. Getting rid of it completely means moving to Java-style pure garbage collection, where even the simplest of long-running scripts could end up consuming all the memory on your system. There is a project called “nogil”, which sets out to loosen some GIL restrictions a bit. That should give some useful speedups, without abandoning the GIL altogether.
@GaryHost-qs9pg (7 months ago)
Very well done video. thank you
@mostafaomar5441 (3 months ago)
Very useful. Thank you so much.
@lawrencedoliveiro9104 (1 year ago)
9:27 Actually, there is a faster way of sharing data between processes than sending picklable objects over pipes, and that is to use shared memory. Support for this is built into the multiprocessing module. However, you cannot put regular Python objects into shared memory: you have to use objects defined by the ctypes module. These correspond to types defined in C (as the name suggests): primitive types like int or float, also array and struct types are allowed. But avoid absolute pointers.
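A minimal sketch of the ctypes-based sharing this comment describes, using multiprocessing.Value and multiprocessing.Array (the names and values are illustrative):

    import multiprocessing

    def bump(counter, arr):
        with counter.get_lock():        # Value/Array carry an optional lock
            counter.value += 1
        for i in range(len(arr)):
            arr[i] *= 2.0

    if __name__ == "__main__":
        counter = multiprocessing.Value("i", 0)            # C int in shared memory
        arr = multiprocessing.Array("d", [1.0, 2.0, 3.0])  # C double array
        p = multiprocessing.Process(target=bump, args=(counter, arr))
        p.start()
        p.join()
        print(counter.value, list(arr))  # 1 [2.0, 4.0, 6.0]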
@user-uc6wo1lc7t (5 months ago)
Aren't Managers a way to store shared python classes (via register)?
@iUnro (1 year ago)
Hello. Can you explain what the difference is between the multiprocessing and concurrent.futures packages? To me they look the same, so I wonder why you chose one over the other.
@AntonioZL (1 year ago)
Very useful. Thanks!
@matejlinek287 (1 year ago)
Wow, finally a mCoding video where I didn't learn anything new :-D Thank you so much James, now I can rest in peace :)
@zlorf_youtube (1 year ago)
Me learns a new python thing... Starts using it in every fucking unnecessary place. Feels good. Really good thing to talk about typical pitfalls.
@riccardocapellino9078 (1 year ago)
I tested this on my old code used for my thesis, which basically performs the same calculation hundreds of times with no I/O (calculates flows in an aircraft engine turbine stage). Took me 10 minutes to adjust the code and made it 40% FASTER
@akalamian (1 year ago)
Great teaching, simple and effective. I've been using multiprocessing with my coroutines and my program is flying, lol
@s7gaming767 (1 year ago)
This helped a lot thank you
@austingarcia6060 (1 year ago)
I was about to do something involving multithreading and this video appeared. Perfect!
@dlf_uk (1 year ago)
What are the benefits/drawbacks of this approach vs using concurrent.futures?
@bersi3306 (1 year ago)
The answer resides in the difference between concurrency and parallelism. When to use each also makes a lot of difference (here, "CPU-bound" problems to solve with parallelism vs "I/O-bound" problems to solve with concurrency). You should also check (on the concurrency side) the difference between a threaded function and a coroutine.
@talhaibnemahmud (1 year ago)
Much needed video. I recently had to use multiprocessing for Image Processing & AI Game Assignment at the university. Although I used concurrent.futures.ProcessPoolExecutor() , this seems like a good option too. Maybe a comparison between these different options? 🤔
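For a rough side-by-side (both run the work in separate processes; concurrent.futures is mostly a different, futures-based interface over the same machinery), a small sketch with a toy function:

    from concurrent.futures import ProcessPoolExecutor
    from multiprocessing import Pool

    def square(x):
        return x * x

    if __name__ == "__main__":
        nums = list(range(10))

        with Pool() as pool:               # multiprocessing API: map returns a list
            a = pool.map(square, nums)

        with ProcessPoolExecutor() as ex:  # concurrent.futures API: map returns an iterator
            b = list(ex.map(square, nums))

        assert a == b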
@plays1361 (1 year ago)
Great video, the program works great
@SalmanKHAN.01 (1 year ago)
Thank you!
@hicoop (1 year ago)
Such a good video!
@tanveermahmood9422 (1 year ago)
Thanks bro.
@jaimedpcaus1 (1 year ago)
This was a great Vid. 😊
@quillaja (19 days ago)
god that's so much easier than what i've been doing writing all the coordination junk around queue
@cjsfriend2 (1 year ago)
You should do a video on using logging alongside the multiprocessing pool
@dinushkam2444 (1 year ago)
Great video Very interesting stuff
@aleale550 (1 year ago)
Great video! You could do a follow up parallel computing video using Dask?
@EvanBurnetteMusic (1 year ago)
This is great! Thanks! Would love a guide on how to use shared memory with multiprocess. I've been optimizing a wordle solver that looks for five words with 25 unique letters as in the recent Stand Up Maths video. On my 8 core machine, each subprocess ends up using half a gig of memory! My data structure is a list of variable length sets. With pool I have to resort to pool.starmap(func, zip(argList1, argList2)) to pass all the data I need into each subprocess. Compared with my naive manual multiprocess implementation, the mp pool version is 30% slower. I'm hoping it can be faster with shared memory. Again, I really appreciate that you created an almost real world problem to demonstrate multiprocessing. It gave me the context I needed to implement this with my program.
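For anyone unfamiliar with the starmap call mentioned above, a tiny sketch with made-up data (each tuple produced by zip is unpacked into the function's positional arguments):

    from multiprocessing import Pool

    def score(word, letters):  # hypothetical two-argument job
        return sum(ch in word for ch in letters)

    if __name__ == "__main__":
        words = ["fjord", "gucks", "nymph", "vibex", "waltz"]
        letter_sets = ["aeiou"] * len(words)
        with Pool() as pool:
            results = pool.starmap(score, zip(words, letter_sets))
        print(results)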
@volbla (1 year ago)
I tried using multiprocessing on my prime number sieve where each process has to write to the same array. It didn't really end up being faster (I'm probably bottlenecked by RAM speed), but I did get the shared memory to work with numpy arrays. In your main process you do:
    shared_mem = SharedMemory(name="John", create=True, size=#bytes)
    an_array = np.ndarray((#elements,), dtype=#type, buffer=shared_mem.buf)
    # Put your data in the array
And in each subprocess you reference the memory by basically doing the same thing again:
    shared_mem = SharedMemory(name="John")
    an_array = np.ndarray((#elements,), dtype=#type, buffer=shared_mem.buf)
    # Do something with the data
In this case it was also useful to pass the process inputs through a Queue rather than function arguments. Then they only have to be instantiated once, even when consuming a lot of unpredictable data.
@EvanBurnetteMusic (1 year ago)
@@volbla Thanks for the queue tip I will definitely be trying that out!
@cute_duck69x3 (1 year ago)
Awesome voice and helpful video 😍
@anihilat (1 year ago)
Great Video!
@pschweitzer524 (8 months ago)
Now the question: would running threads within each multiprocess process be even faster?
@goowatch (1 year ago)
You should preferably use per-core display to better show what you want to explain. Thanks for sharing your experience.
@nikolastamenkovic7069 (1 year ago)
Great one
@ali-om4uv (1 year ago)
It would be great if you could show whether this can be used for ML hyperparameter tuning and other ML tasks.
@mingyi456 (1 year ago)
Please make a video about picklable objects and pickling, I would like to know more about it.
@steinnhauser3599 (1 year ago)
Awesome!
@user-zu1ix3yq2w (1 year ago)
i went down a rabbit hole, MP, numba, cython, pypy... The speedup people can get is insane.
@nocturnomedieval (1 year ago)
Could you please help me find the answer: how does numba with the option parallel=True relate to cores/threads/processes? :D
@felixfourcolor (1 year ago)
More videos on threading/asyncio please 😊
@lakeguy65616 (7 months ago)
I have a somewhat related question(s). I have a function where I open a file, perform a number of functions and then write the file to disk. Without multiprocessing, it takes 1-2 minutes per file. I've modified my code to take advantage of the multiple cores on my PC. It's reduced the time by a factor of 3+. My problem is that it's maxing out the CPU at 100% until the function finishes, which means I can't use the PC for any other purpose while the multiprocessing is taking place. Here's my question: how can I reduce the workload on the CPU (even if it takes a little longer)? Processing 100 files takes at least 45 minutes, and eventually I have 500+ files to process.... Any ideas? Thank you!
@therelatableladka (6 months ago)
    from multiprocessing import Pool

    # Specify the number of cores to use
    num_cores = 4  # Change this to the desired number of cores

    with Pool(processes=num_cores) as pool:
        # Your code here
Hope it helps
@pranker199171 (1 year ago)
Please do some more real world examples this is amazing
@JohnZakaria (1 year ago)
If numpy/scipy do the computations in C land, why don't they release the GIL and acquire it back when the computation is done? When writing a C++ module using pybind11, you have the option to release the GIL, granted that you are doing pure C++.
@julius333333 (1 year ago)
pretty sure it does
@JohnZakaria (1 year ago)
@@julius333333 if it did, then threads would speed up the computation. Just like i/o calls that do release the GIL
@jheins3 (1 year ago)
Not an expert by far, and based on your comment, you probably know 100x more than I do. With that being said, I am going to speculate that the traditional behavior of numpy/scipy follows a standard API call to an external C/C++ optimized library (a DLL on Windows). The API is essentially a function that initiates the C-land magic. For error handling and for how the GIL works, the function call waits to receive the output from C-land before handing it back. Because the API is essentially a function call, the GIL cannot be released till the function returns. Again, that's a guess.
@Kamel419 (1 year ago)
I had to solve a complex problem similar to this and ended up needing to use a specific sequence of queues and workers to solve it. I think I ended up with 6 total workers, each with a "parent" worker flowing into it. I think it would be neat to showcase something like this
@maxtulgaa7360 (1 year ago)
Thx
@maheshcharyindrakanti8544 (1 year ago)
Took me a while due to a mistake, but it works. Thanks!
@pawemalinowski4838 (1 year ago)
Great vid! The way I learned to use mp was via the Process object. The latest application was training TF models on a GPU. I've got an optimizing algorithm that searches for the best hyperparameters for the models. Calculations for the next parameter set to check take some time (after 50 points it takes a lot of time tbh - longer than the model training). So I created mp.Process() objects that deal with the parameter search, and they then communicate (via mp.Pipe()) with the process that builds and trains the models on the GPU (to avoid multiple processes accessing the hardware at the same time). Usage of mp.Queue helps with communication ;) It works great! Keeps both the GPU and the CPU cores busy all the time :D But I've never had to use Pool though :P So mp.Process is closer to me :D
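A stripped-down sketch of the producer/consumer shape described above, with a Queue standing in for the Pipe and a print standing in for the GPU training step (all names are hypothetical):

    import multiprocessing as mp

    def search(q):
        for params in range(5):     # stand-in for a hyperparameter search
            q.put(params)
        q.put(None)                 # sentinel: no more work

    def train(q):
        while (params := q.get()) is not None:
            print("training model with", params)  # stand-in for the GPU-bound step

    if __name__ == "__main__":
        q = mp.Queue()
        producer = mp.Process(target=search, args=(q,))
        consumer = mp.Process(target=train, args=(q,))
        producer.start()
        consumer.start()
        producer.join()
        consumer.join()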
@lawrencedoliveiro9104 (1 year ago)
2:07 Remember that “I/O” can also include “waiting for a user to perform an action in a GUI”.
@tobiasbergkvist4520 (1 year ago)
On Linux/macOS you can use the fork syscall to "send" things that can't be pickled, but only when using `Process`, and not when using `Pool`, since the process needs to get all the unpicklable data at startup and can't receive it after it has started. The child process inherits the parent's memory with copy-on-write when using `fork`, meaning it only creates a copy of the memory if an attempt to modify it is made.
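A small sketch of the fork-only trick described here (Linux/macOS; the lock is just one example of something that cannot be pickled):

    import multiprocessing
    import threading

    unpicklable = threading.Lock()  # locks cannot be pickled

    def child():
        # Works only because the child inherited the parent's memory via fork,
        # not because the lock was pickled and sent over a pipe.
        with unpicklable:
            print("child acquired the inherited lock")

    if __name__ == "__main__":
        ctx = multiprocessing.get_context("fork")  # not available on Windows
        p = ctx.Process(target=child)
        p.start()
        p.join()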
@czupryn0135 (1 year ago)
If I have a list of x,y coordinates and I need to calculate the distance between each pair of them, then to make it faster I cut the 1000-element array into 5 smaller 200-element arrays. How do I then make the first core process the first array, the second core the second one, and so on?
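You generally don't pin chunks to particular cores yourself; hand the chunks to a pool and the OS schedules them across cores. A rough sketch under those assumptions (the distance function here is illustrative):

    import itertools
    import multiprocessing

    import numpy as np

    def distances(chunk, points):
        # distances from every point in this chunk to every point in the full set
        return [float(np.hypot(*(p - q))) for p, q in itertools.product(chunk, points)]

    if __name__ == "__main__":
        points = np.random.rand(1000, 2)
        chunks = np.array_split(points, 5)  # five ~200-point pieces
        with multiprocessing.Pool(processes=5) as pool:
            results = pool.starmap(distances, [(chunk, points) for chunk in chunks])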
@chrysos (1 year ago)
I feel more than just informed
@Roule_n_Scratche (1 year ago)
Hey mCoding, could you make a video about Cython?
@rabin-io (1 year ago)
Any chance for a follow-up using this inside of a Class? And compare it with pathos.multiprocessing?
@firefouuu (1 year ago)
I'm still not sure why wavfile.read is able to run in a parallel thread despite the GIL. Is it just because it's C code? So if, for any reason, this were written in pure Python, it would not work?
@alexh7849 (1 year ago)
Hey mcoding! I think a video discussing how False == 0 in Python would be neat (especially since it caused a bug in prod for me lol). It was unexpected that they would implement that and make bool a subclass of int, as modern langs like Rust/Go have ditched the low-level concept of bools being ints. Maybe include the history of C bools too?
@nocturnomedieval (1 year ago)
I think it was already discussed in the video about 25 common python errors. Take a look
@joshinils (1 year ago)
A video on how to figure out which pieces take the most time and how to optimize for time would be great. What profilers are there for Python, how do I use them, and how do I use them right?
@peterfisher3161 (1 year ago)
"what profilers are there for python" Spyder and PyCharm have built in profilers.
@joshinils (1 year ago)
@@peterfisher3161 ah, so I'd have to use those IDEs, not VS code... ok I'd rather have some cli solution or one that works with vs code.
@replicaacliper (1 year ago)
Scalene is an amazing profiler especially on Linux
@peterfisher3161 (1 year ago)
@@joshinils Quickly looking up I found cProfile, which is a built-in and can be used from the terminal. Not much popped up on VS code.
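A minimal example of the built-in profiler mentioned here; from a terminal, an equivalent is: python -m cProfile -s cumtime your_script.py

    import cProfile
    import pstats

    def slow():
        return sum(i * i for i in range(5_000_000))

    cProfile.run("slow()", "profile.out")  # profile the call and write stats to a file
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)  # top 10 by cumulative time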
@jbusa5dimvzgkiik (1 year ago)
I've found yappi + gprof2dot to be really useful to find where asyncio applications are spending the CPU time.
@mcnica89 (1 year ago)
What is this, a CPU monitor window for ants? It needs to be at least 3 times as big! Joking aside, I enjoyed the video and learned something! The pitfalls are especially helpful. Thank you :)
@necbranduc (1 year ago)
Awesome! What about using apply_async vs map?
@renancatan (1 year ago)
Very nice! Just a hint: you know so much about classes, functions, etc. Why not make an OOP-for-beginners video? Many beginners/intermediates still struggle with the most basic expressions involving classes.
@botondkalocsai5322 (1 year ago)
Can async be used in conjunction with multiprocessing?
@replicaacliper (1 year ago)
I'm using Numba to optimize a program and I'm getting ~100% CPU usage. I want to run this program multiple times with independent parameters. In this case, would multiprocessing provide any real benefit over running the program one at a time?
@SageBetko (1 year ago)
If Numba is already fully utilizing all CPU cores, then no, the overhead of adding Python’s multiprocessing into the mix will probably just slow things down.
@emilfilipov169 (1 year ago)
OMG you used start and end time as part of the code?!?!?! In the meantime I get rejected in an interview because I didn't know how to write a decorator to do that same task.
@m0Ray79 (1 year ago)
And don't forget that pure Python is not the only option. Pyrex, which is translated to C/C++, opens even broader bridges towards performance.
@user-xh9pu2wj6b (1 year ago)
Why use Pyrex when there's Cython tho?
@m0Ray79 (1 year ago)
@@user-xh9pu2wj6b Pyrex is a Python language superset. Cython is its translator. I mentioned it in my videos.
@user-xh9pu2wj6b (1 year ago)
@@m0Ray79 Cython is also a python superset tho. And no, Cython isn't a translator for Pyrex, it's a separate thing that was influenced by Pyrex back then. And Pyrex is kinda dead with its last stable release being 12 years old.
@m0Ray79 (1 year ago)
​@@user-xh9pu2wj6b The syntax and the whole idea was introduced in Pyrex, I'm still calling it the old name. Ok, let's say Pyrex became Cython. And the file extension is still .pyx.
@TheGmodUser (1 year ago)
Soo, what do you do if the object isn't picklable?
@unperrier5998 (1 year ago)
Can't wait for PEP 554 multiple interpreters to be mainline.
@angmathew4377 (1 year ago)
Beautiful, it just reminded me of the C and C# world.
@user-fe2oh8oj2u (1 year ago)
Are there any potential dangers/threats when using these methods? I understand that you can slow your program down instead of giving it speed, but besides that ? Any dangers to the computer itself or the data source (if it is coming from a database) ?
@tehseensajjad1003 (1 year ago)
I'm learning stuff myself, though here's what I can say about databases. Corrections/additions are welcome. Usually there are specialized drivers for doing stuff asynchronously with the database. Also, ACID should take care of not ruining the database. As for damage to the computer, no. This is the intended way of doing things on a multi-core processing unit. Don't be scared to push your computer. Although the robot uprising hasn't happened yet, it's safe to say computers are not humans.
@user-fe2oh8oj2u (1 year ago)
@@tehseensajjad1003, thank you for your reply. I am planning to experiment with some of these methods for my projects. Let's see how many "time" gains it will give.
@tehseensajjad1003 (1 year ago)
@@user-fe2oh8oj2u It can get very confusing trying to design your program around doing stuff parallel or concurrent at first, but it'll click one day. Good luck friend.
@etopowertwon (1 year ago)
You don't want to do non-atomic operations that can leak outside. Like in SQL, checking for something with SELECT first and INSERTing it only if it was not found:
    if not sql("SELECT Id FROM Table WHERE Table.Foo=1"):
        sql("insert into Table(Foo) values(1)")
Two processes can try to insert the same value into the table at the same time.
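One hedged way to avoid that race is to let the database enforce uniqueness, e.g. with a unique constraint and an insert that ignores duplicates (sqlite3 shown as an example; other databases have equivalents such as ON CONFLICT clauses):

    import sqlite3

    conn = sqlite3.connect("example.db")
    conn.execute("CREATE TABLE IF NOT EXISTS t (foo INTEGER UNIQUE)")

    # Atomic: two processes racing on the same value cannot both insert it,
    # so there is no separate SELECT-then-INSERT window to exploit.
    conn.execute("INSERT OR IGNORE INTO t (foo) VALUES (?)", (1,))
    conn.commit()
    conn.close()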
@caiomazzaferroadami (5 months ago)
Can somebody help me out? I'm trying to put some of the things he mentioned in the video in practice and ran through something weird. In 5:50, he uses the iterable object from pool.imap_unordered() to print the return arguments from 'etl' function (filename and duration) for each element in the sounds list. I'm trying to do something similar, but my function (equivalent to his 'etl') returns just one argument instead of two. However, when I try to print each element from that iterable object, my program just freezes and I have to kill it. I can't figure out what's wrong. Note: when I convert it into a list, i. e. list(pool.imap_unordered(fcn, iterable)), it seems to work fine for some reason.
@anon_y_mousse (1 year ago)
Once upon a time, multi-processing required multiple full CPU's, so it's a very understandable speako. It might also show your age. Although, it might make for an interesting video to make a Beowulf cluster with RPi's and show how to program it to calculate something in parallel. Pi itself is obvious and easy, but perhaps how to do video encoding or 3D scene rendering would be a great fit.
@harrytsang1501 (1 year ago)
The best way to talk about multiprocessing and task scheduling is with RTOS. The important parts are in some 2000 lines of C and it's amazing for embedded systems
@anon_y_mousse (1 year ago)
@@harrytsang1501 It might be pretty cool if he did a whole video series showing beginner methods in one and more advanced methods in an other. Using RPi OS with Python for the beginner series, RTOS and C for the more advanced.
@paraschavan014 (1 year ago)
You Are Amazing 😊✨
@imnotkentiy (1 year ago)
-It is the end -ha. All this time i've only been using 1/16 of my true power, behold -nani?!
@ewerybody (5 months ago)
This is cool and all for relatively small python scripts. What if I have a UI (maybe Qt for Python) and want to kick off some work on a pool of processes. I wouldn't want these processes to load (or even execute) any of the UI code 🤔