CS 188 Lecture 25: Applications
57:45
CS 188 Lecture 23: Neural Networks
1:12:51
CS 188 Lecture 23: Optimization
1:17:24
CS 188 Lecture 20: Naive Bayes
1:14:45
8 years ago
CS 188 Lecture 19: Particle Filtering
1:19:07
CS 188 Lecture 17: Markov Models
1:13:57
CS 188 Lecture 15: Bayes Nets Sampling
1:19:37
CS 188 Lecture 14: Bayes Net Inference
1:11:09
CS 188 Lecture 11: Probability
1:24:10
8 years ago
CS 188 Lecture 8: MDPs II
1:24:27
8 years ago
CS 188 Lecture 7: MDPs I
1:25:24
8 years ago
CS 188 Lecture 5: Adversarial Search
1:23:00
CS 188 Lecture 4: CSPs
1:22:50
8 years ago
A* Graph Search Optimality
25:52
8 years ago
CS 188 Lecture 3 -- Informed Search
1:21:51
CS 188 Lecture 2 -- Uninformed Search
1:22:03
CS 188 Lecture 1 -- Introduction
1:23:07
Comments
@gerpineda9983 20 days ago
Lecture starts at 26:06
@100KGNatty 20 days ago
CS188 What are you doing?! You better not be jacking off to BILLY MAYS in there AGAIN
@inchasto 2 months ago
Lecture 10 RL II ... kzfaq.info/get/bejne/f76le9uc1bWZdWw.htmlsi=dbvE4__-vxnpJ906
@shaheerziya2631 3 months ago
Never expected I’d be learning about economics in a CS course.
@Mike-77-YT 4 months ago
In case you were wondering, this channel's name is similar to another YouTuber's, named CS188.
@user-wr2tb9zx8g 9 months ago
New stuff starts around 33:09
@ViviMagri 1 year ago
No lecture 10? ;-;
@gamersengeki7150 1 year ago
Very well explained
@sengeki7101 1 year ago
Thank you!
@Jax9835 1 year ago
Why C, East can be A
@yohannistelila8879 1 year ago
Set the speed to 1.25x to get Ben Shapiro's voice. Thank you for the lecture!
@owencoukell388 1 year ago
So real, thank you
@spongebobsquarepants4576 2 years ago
Thank you. This was a great refresher!
@TimothyZhou0 2 years ago
Audited this many years ago for fun. Still one of the clearest explanations I have ever come across!
@chocolatecharlie 2 years ago
Lecture 3 - Informed Search

Search Heuristics
A heuristic is a function that estimates how close a state is to a goal. It is designed for a specific search problem.

Greedy Search
. Strategy: expand the node that you think is closest to a goal state
. Heuristic: estimate of the distance to the nearest goal for each state
With a bad heuristic, greedy search can be arbitrarily bad (the worst case looks like a badly-guided DFS).

A* Search
The idea is to combine UCS and greedy search. UCS orders by path cost, or backward cost, whereas greedy search orders by goal proximity, or forward cost. A* orders by the sum of those two costs (backward cost + forward cost).
A* tree search is not optimal in general: for optimality we need the estimates to be less than the actual costs (underestimation).

Admissible Heuristics
A heuristic h is admissible (optimistic) if 0 <= h(n) <= h*(n), where h*(n) is the true cost to a nearest goal. Coming up with admissible heuristics is most of what's involved in using A* in practice.

Optimality of A* Tree Search
To be continued...
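A minimal sketch of the A* ordering described in the notes above, f(n) = g(n) + h(n) with an admissible h; the toy state space and heuristic are made up for illustration, not from the lecture:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: expand nodes in order of f(n) = g(n) + h(n).

    neighbors(state) yields (next_state, step_cost) pairs; h is an
    admissible heuristic (never overestimates the true cost to a goal).
    """
    fringe = [(h(start), 0, start, [start])]  # (f, g, state, path)
    best_g = {}                               # cheapest g seen per state
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return path, g
        if state in best_g and best_g[state] <= g:
            continue                          # already expanded more cheaply
        best_g[state] = g
        for nxt, cost in neighbors(state):
            heapq.heappush(fringe, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None, float("inf")

# Toy 1-D example: states are integers, the goal is 5, every step costs 1.
path, cost = a_star(
    0, 5,
    neighbors=lambda s: [(s - 1, 1), (s + 1, 1)],
    h=lambda s: abs(5 - s),  # admissible: exactly the true remaining cost
)
# path == [0, 1, 2, 3, 4, 5], cost == 5
```

With h = 0 everywhere this degrades to UCS; with a heuristic that overestimates, the returned path can be suboptimal, which is the point of the admissibility condition.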
@thanhphat893 2 years ago
thank you, bro
@chocolatecharlie 2 years ago
Lecture 2 - Uninformed Search

Designing Rational Agents
An agent has two ways to interact with its environment: perceive and act. The algorithm is about choosing an action based on sensory input.

Reflex Agents
Reflex agents choose an action based on the current percept. They don't plan ahead. They may have memory or a model of the world's current state, but they do not consider the future consequences of their actions. Reflex agents can be rational.

Planning Agents
Planning agents make decisions based on the hypothesized consequences of actions. They need a model to simulate how the world evolves, and they must have a clear goal.
. Optimal planning: if there are multiple paths to a goal, find the best possible one
. Complete planning: if any solution exists at all, find it
. Planning: compute the whole path to the goal up front
. Replanning: define some intermediate goals

Search Problems
A search problem is a general framework: once you manage to frame the real-world problem you are trying to solve as a search problem, a search algorithm will get you a plan from where you start to where you want to go.
Algorithms: DFS, BFS, Uniform Cost Search (= Dijkstra).
A search problem consists of a state space, a successor function (which, given a current state and an action to take, computes the resulting state and the cost), a start state, and a goal test. The algorithm returns a solution, i.e. a sequence of actions (a plan) which transforms the start state into a goal state.
/!\ It is important to remember that search problems are just models: if you try to solve a real-world problem with them, they generally will not be perfect. There can be issues with the way the problem is modeled.

State Space
The world state includes every last detail of the environment, whereas a search state keeps only the details needed for planning.

State Space Graphs
A state space graph is a mathematical representation of a search problem where each node is an abstracted world configuration. The goal test is a set of goal nodes. Each state occurs only once. State space graphs are rarely fully built in memory since they're too big, but they are a useful idea.

Search Trees
In a search tree, the nodes encode a path from the start state (at the root of the tree). By following the tree backwards from a goal state to the root we get a plan, i.e. a solution to the problem. In a search tree, the same state can appear in several different nodes; in fact, there is a lot of repeated structure in a search tree.
In both cases (state space graphs and search trees), we construct on demand, and as little as possible.

General Tree Search Algorithm
================================================================================
function TREE-SEARCH(_problem_, _strategy_) returns a solution or failure
    initialize the search tree using the initial state of _problem_
    loop do
        if there are no candidates for expansion then return failure
        choose a leaf node for expansion according to _strategy_
        if the node contains a goal state then return the corresponding solution
        else expand the node and add the resulting nodes to the search tree
    end
================================================================================
Which fringe node to explore next depends on the strategy.

Depth-First Search (DFS)
. Strategy: expand a deepest node first
. Implementation: fringe is a LIFO stack (no need to track depths explicitly)
. Complete? Yes, for finite search trees (otherwise you might get stuck in an infinite branch before reaching the goal node)
. Optimal? No, it finds the first solution in the tree regardless of how deep that solution is
(Notation: b is the branching factor; m is the maximum depth)
. Time complexity: in the worst case, the size of the search tree, O(b^m)
. Space complexity: O(b*m) (keep in mind that only the siblings along the path to the root are stored)
DFS outperforms BFS when memory is limited and/or the solutions are near the bottom.

Breadth-First Search (BFS)
. Strategy: expand a shallowest node first
. Implementation: fringe is a FIFO queue
. Complete? Yes
. Optimal? Yes, but only if all costs are 1
(Notation: b is the branching factor; s is the depth of the shallowest solution)
. Time complexity: O(b^s)
. Space complexity: O(b^s)
BFS outperforms DFS when there are loops (BFS is safe) and/or we care about finding shallow goals.

Iterative Deepening
The idea is to get DFS's space advantage with BFS's time / shallow-solution advantages.
. Strategy: run a DFS with depth limit 1; if no solution is found, start over with a DFS of depth limit 2, then 3, etc.
. Time complexity: O(b^s), same as BFS, so it's not wastefully redundant (!). Generally, most of the work happens in the lowest level searched.
. Space complexity: O(b*s), like DFS
Uniform Cost Search (UCS)
BFS finds the shortest path in terms of number of actions, but it does not find the least-cost path.
. Strategy: expand a cheapest node first
. Implementation: fringe is a priority queue (where priority is the cumulative cost)
UCS processes all nodes with cost less than the cheapest solution. If that solution costs C* and actions cost at least e, then the effective depth is roughly C*/e.
. Time complexity: O(b^(C*/e))
. Space complexity: O(b^(C*/e))
. Complete? Yes, assuming the best solution has a finite cost and the minimum action cost is positive
. Optimal? Yes
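The algorithms in the notes above differ only in the fringe data structure (LIFO stack for DFS, FIFO queue for BFS, priority queue for UCS). A quick sketch of the BFS-vs-UCS contrast; the graph and costs here are a made-up example:

```python
from collections import deque
import heapq

def bfs(start, goal, neighbors):
    """BFS: FIFO fringe; returns the path with the fewest actions."""
    fringe = deque([[start]])
    seen = {start}
    while fringe:
        path = fringe.popleft()
        if path[-1] == goal:
            return path
        for nxt, _cost in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                fringe.append(path + [nxt])
    return None

def ucs(start, goal, neighbors):
    """UCS: priority-queue fringe keyed by cumulative cost; returns (cost, path)."""
    fringe = [(0, [start])]
    expanded = set()
    while fringe:
        cost, path = heapq.heappop(fringe)
        state = path[-1]
        if state == goal:
            return cost, path
        if state in expanded:
            continue
        expanded.add(state)
        for nxt, step in neighbors(state):
            heapq.heappush(fringe, (cost + step, path + [nxt]))
    return None

# Made-up weighted graph: the direct edge S->G is expensive.
graph = {"S": [("A", 1), ("G", 10)], "A": [("B", 1)], "B": [("G", 1)], "G": []}
nbrs = graph.get  # neighbors(state) -> list of (next_state, step_cost)

print(bfs("S", "G", nbrs))   # fewest actions: ['S', 'G']
print(ucs("S", "G", nbrs))   # least cost: (3, ['S', 'A', 'B', 'G'])
```

On this graph BFS returns the one-step path of cost 10 while UCS returns the three-step path of cost 3, which is exactly the "shortest path in actions vs. least-cost path" distinction the notes make.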
@Sharkmonger 2 years ago
This really gets the joj done!
@jjax3661 1 year ago
man of culture
@benjizucker6091 10 months ago
If you want foundation repair no no no no no
@user-qf8ot9jw3l 2 years ago
41:00
@adibahmohima7163 2 years ago
tic tac toe begins at 27:44
@Pedritox0953 2 years ago
Great video !
@calcifer7776 2 years ago
Me: today no TV, just go through the CS188 lectures.
CS188 lecture: what about some Rick & Morty?
Me: sure, why not, I might as well rewatch the whole episode...
@pascalbercker7487 2 years ago
Great lecture, but the audio is a bit harsh.
@theodoredalton3355 2 years ago
Lol CS 188 Cum Sock
@theodoredalton3355 2 years ago
Sussy af
@michaelcstorm3808 2 years ago
lecture begins at 2:06
@michaelcstorm3808 2 years ago
lecture begins at 3:15
@michaelcstorm3808 2 years ago
lecture begins at 2:49
@michaelcstorm3808 2 years ago
lecture begins at 4:14
@michaelcstorm3808 2 years ago
Thank you for uploading
@sheerio105 2 years ago
At 1:03:39 Davis wrote the equation for Q*(s,a) wrong: it ends with V*(s), whereas the typed version (the right one) ends with V*(s').
@saminislam5278 3 years ago
Hello. Where can I find the whole playlist?
@khaledsabri9458 3 years ago
kzfaq.info/sun/PL4fsLMhBK1A2oeH__2bXbZC8oH87lsQpe
@loercayt6146 3 years ago
Where's the YTP?
@spookyscaryskeletonsmith2840 3 years ago
this is 0% JOJ
@adamtran5747 2 years ago
Lemme guess. You failed this class...
@spookyscaryskeletonsmith2840 2 years ago
@@adamtran5747 100 percent unsatisfied
@ashwinbaskar 1 year ago
@@spookyscaryskeletonsmith2840 lmao
@loercayt6146 3 years ago
This is not what I was expecting. This is what I expected: kzfaq.info/get/bejne/mNFjic-J0azTn4k.html
@average-osrs-enjoyer 3 years ago
55:21 how do you solve this such that you get 3/7 and 4/7? I get 2/5 and 3/5
@lewkyb 3 years ago
lecture starts at 3:23
@lewkyb 3 years ago
lecture starts at 4:46
@superleekegshoondinovevo6998 3 years ago
wait this isn't a ytp
@ginkgobiloba9088 3 years ago
Me too
@kennyanthony4319 3 years ago
Amazing thank you!
@mostinho7 4 years ago
Some agents plan ahead (they simulate what effect their action would have on the environment); for example, a Pac-Man agent will have a game of Pac-Man simulated. Agents that don't plan ahead are called reflex agents: a reflex Pac-Man agent will just go to the nearest dot, but if there's a barrier it'll get stuck, since it doesn't think ahead to see what would happen if it tried to get around the barrier. 9:00
27:10 The state space differs depending on the problem: for a pathfinding problem the state is simpler than for the eat-all-dots Pac-Man problem.
29:27 State space size: the more variables we keep track of, the larger our state space will be. We only want to keep track of the variables relevant to solving our problem.
Stopped at 39:00, todo continue
@jamalabdulwahid7303 4 years ago
Why is he talking so fast?
@faisalzia2201 4 years ago
Thanks, a really helpful lecture for understanding the concept of adversarial search with alpha-beta pruning.
@murahat98 4 years ago
Best explanation so far!!! Thanks a lot.
@nomoniker7261 4 years ago
36:34 - 36:44
@andyeccentric 4 years ago
It's all here at my fingertits!
@srivatsarameshjayashree8085 5 years ago
Any idea which application is used for the writing? Thanks.
@chemgreec 5 years ago
lol... 2016... back when Pokémon Go was a thing and it made sense to give an example about looking for Pokémon around campus
@user-pt7tv 5 years ago
What is a suboptimal goal?
@abdullahbezir3217 3 years ago
A suboptimal solution is one that is not the best solution available.
@user-pt7tv 5 years ago
what is a state?
@charlesb317 5 years ago
Look up cs188 lol
@od7738 5 years ago
Thanks champ