ChatGPT does Physics - Sixty Symbols

638,043 views

Sixty Symbols

1 year ago

We put chatbot ChatGPT to the test with some physics questions.
More links and info below ↓ ↓ ↓
Featuring Professor Philip Moriarty... His blog on this topic can be found at: muircheartblog.wpcomstaging.c...
ChatGPT: chat.openai.com/auth/login
More videos with Professor Moriarty: bit.ly/Prof_Moriarty
See our computer science sister channel Computerphile: bit.ly/XqEDYi
Visit our website at www.sixtysymbols.com/
We're on Facebook at / sixtysymbols
And Twitter at / sixtysymbols
This project features scientists from The University of Nottingham
bit.ly/NottsPhysics
Patreon: / sixtysymbols
Sixty Symbols videos by Brady Haran
www.bradyharanblog.com
This video filmed by Sean Riley
Email list: eepurl.com/YdjL9

Comments: 1,700
@krautbrain 1 year ago
ChatGPT is very human in the sense that it doesn't like to say "I don't know" when it doesn't know.
@dickybannister5192 1 year ago
uh huh. I think it sounds to me very resonant with the sort of over-competitive, over-confident narcissistic nutjobs that wrote it. 20 years ago, if you said to someone in IT "I think there is a problem", they would listen to you, try it themselves and then spend some time thinking about what else they might have missed going forward as a result or in a similar vein. 10 years ago they would just rush to check and fix it (JFDI!!). Now they just tell you not to be so negative, stop complaining about stuff and be happy that you have this marvellous sh*t they wrote.
@thePyiott 1 year ago
Very often when you get that response, you can phrase the question differently and suddenly it's a master on that topic
@doggosuki 1 year ago
I didn't realise people did this until my late teenage years, since I am autistic. So I thought everyone else was just really smart and that there was something wrong with me, until I realised most people just pretend they know things, for reasons I don't completely understand
@DaxLLM 1 year ago
All you have to do is ask it, "Are you sure that's the right answer?" It will go back and check.
@ameerhamza4816 1 year ago
it actually does not know that it does not know
@mighty8357 1 year ago
As a teacher I have tried using ChatGPT with my students. The goal was to find specific errors in its answers. On one hand, the students had to show a real grasp of the material being taught to be able to find reasoning mistakes. On the other hand, it was a way to show the students not to blindly use this tool to avoid doing their homework in other courses.
@renvesir8713 1 year ago
that one is tricky because you can "train" chatgpt by pointing out mistakes or just asking it what it got wrong
@Bibibosh 1 year ago
You tried?
@Bibibosh 1 year ago
@@renvesir8713 it would reply "nothing is wrong"
@alexgravenor 1 year ago
@@renvesir8713 I tried this, it just gave me more wrong answers that were differently wrong
@renvesir8713 1 year ago
@@Bibibosh Weird, I did some math questions with it, and every single time I spotted a mistake I would ask it "what did you get wrong?" and it would politely thank me for spotting the mistake and then correct itself
@justalittlestretch9404 1 year ago
When I first heard about ChatGPT, I was having some physics fun with it. This was one of my favorites: "Explain the Unruh Effect like a gangster"..."The Unruh Effect is a theoretical prediction of quantum mechanics that states that an accelerating observer will perceive the vacuum of empty space as being filled with particles. It's like a gangster who is constantly on the move, always hustling and shaking things up, will see opportunities and threats everywhere, even in places where someone standing still might not."
@bidyo1365 1 year ago
🤣
@mutatachi 1 year ago
This is what I got: Listen up, son. The Unruh effect is like a big shot physics boss, rollin' in the quantum mechanics world. So, ya see, in a vacuum, it's all calm and peaceful. But when ya introduce acceleration, it's like stirrin' up trouble. The vacuum starts actin' all heated up, causin' particles to pop up outta nowhere. That's the Unruh effect, gettin' all heated up in the presence of acceleration. It's like the real deal, straight up science, ya dig?
@artificialhits 1 year ago
Ask it to explain fractional reserve banking as nicki minaj rap song
@bidyo1365 1 year ago
@@mutatachi wow hahahaha
@Stopinvadingmyhardware 8 months ago
Not really.
@Aziraphale686 1 year ago
Like the Professor said, GPT is just a language model, the fact that it can do math AT ALL is kind of amazing.
@vaakdemandante8772 1 year ago
If it was trained on maths symbols and equations it would be able to answer maths questions better, to the point of faking calculations, but it would still not have ANY grasp of mathematical ideas. Those kinds of networks aren't capable of encoding higher-order concepts, though they may appear to. Only by probing for deeper understanding with more tricky/edge-case questions can you uncover that in fact there's no deeper knowledge of any concept.
@simongross3122 1 year ago
Not really. Maths is the language of logic.
@635574 1 year ago
It can often fail at simple math, maybe it depends if you prompt it for talking or for calculation. There are certainly more math specific models out there.
@simongross3122 1 year ago
@@635574 Actually it admitted that to me when I asked. It said it wasn't always 100% accurate. When I asked if it could do arithmetic it took a long time to answer. It was much more comfortable generating a limerick about a goose and a hen.
@rysw19 1 year ago
@@simongross3122 it didn't admit anything, it doesn't even know what your question meant. Turn up the temperature on the model and you'll quickly get the exact opposite answer. It's an amazing pattern replicator, that's all it is.
@Ojisan642 1 year ago
People are working on combining ChatGPT with Wolfram Alpha, because a language model cannot do basic maths. Wolfram Alpha can do that piece quite well.
@peterisawesomeplease 1 year ago
I don't think the issue is fundamentally that its a language model. The deeper issue is that ChatGDP is mostly a search engine(not that human minds don't fall for this too). If you pose a popular question with a slight tweak of the parameters chatGDP will just give you the answer to the popular question without taking into account your tweaks. Using different numbers is one tweak. Which is why it appears to be bad at basic math. But really it can be any aspect of the question and it tends to fall apart.
@BenKingOfMonkeys 1 year ago
thats true
@phizc 1 year ago
It can do really basic math. It got the 162 million / 7200 wrong, but 2244.44 / 60 = 37.41 correct. Which is kinda impressive since it's not doing any calculations at all. Maybe it "remembered" it could do 2244.44/60 so it fudged its answer on the first part so it could do the latter correctly. The server would probably block you for excessive questions, but it could be fun to test what "calculations" it can actually do.. E.g. .. What's 10 / 2? What's 11 / 2? ... What's 188462118 / 261 ? ... and so on.
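For the record, the divisions quoted in the comment above can be checked in a couple of lines of Python (the 162 million / 7200 figure is as stated in the comment; the rounding to two decimals is my assumption about how the answers were read):

```python
# Sanity-check the two divisions mentioned above, the kind of
# cross-check a pure language model cannot actually perform internally.
def check():
    first = 162_000_000 / 7200   # the division ChatGPT reportedly fudged
    second = 2244.44 / 60        # the one it reportedly got right
    return round(first, 2), round(second, 2)

print(check())  # (22500.0, 37.41)
```

So 2244.44 / 60 does indeed round to 37.41, consistent with the comment.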
@Kevin_Street 1 year ago
@@phizc It just gives you a time out if you ask too many questions. You can come back later and continue the conversation.
@iwanttwoscoops 1 year ago
@@peterisawesomeplease friend, you can't pretend to know the inner workings chatGPT while calling it chatGDP
@beaconofwierd1883 1 year ago
A very cool and strange thing is that you can ask it to correct itself without saying what it did wrong. Simply take its own output and say "Find any errors in this text if there are any and fix them. If there are none, respond 'ok'" or something similar, and you can get more accurate results :) I guess it's because there's so much text on the internet where people correct each other :p
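The self-review trick described above can be sketched as a small loop. This is only a sketch: `ask` is a hypothetical callable (prompt string in, reply string out) standing in for whatever chat API you use, not a real library function.

```python
REVIEW_PROMPT = ("Find any errors in this text if there are any and fix them. "
                 "If there are none, respond 'ok'.")

def self_review(ask, draft, max_rounds=3):
    """Feed the model's own output back to it until it answers 'ok'.

    `ask` is a hypothetical stand-in: it takes a prompt string and
    returns the model's reply as a string.
    """
    text = draft
    for _ in range(max_rounds):
        reply = ask(f"{REVIEW_PROMPT}\n\n{text}")
        if reply.strip().lower() == "ok":
            return text   # model found nothing left to fix
        text = reply      # model returned a corrected version; loop again
    return text
```

As the replies below note, this tends to fix only the most obvious mistake per round, which is why the loop allows several passes.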
@rutgerhoutdijk3547 1 year ago
You can just say 'rethink the answer' and it will usually correct itself
@globalincident694 1 year ago
@@rutgerhoutdijk3547 In my experience this works well if it made one mistake, but if it made multiple mistakes it will just fix the most obvious one.
@SmartK8 1 year ago
Also you need to prime it: tell it what you want, and in what format. Even a student knows it's a test, and in which area. If you gave this paper to a student who just popped into existence, even if he was smart, it would take him a while to orient himself. The situation is not comparable of course, but I think it would do better.
@thedave1771 1 year ago
It's so absolutely weird that it apologizes and seems to understand what went wrong. Of course, that's not always useful either.
@Ian.Murray 1 year ago
So it's basically useless.
@Ercrcrdsstff 1 year ago
I've actually been using ChatGPT to help with my intro physics homework. It gets about 50% of the questions wrong, but it is a pretty helpful tool. It's like working along with someone who is also in intro physics. By bouncing ideas off of it I can sometimes get to the answer myself. This is really great bc I work full time on top of schooling, so I'm not able to make it to any tutoring sessions or work with anyone in my class. ChatGPT is gonna be my saving grace this semester
@tariq_sharif 1 year ago
How do you know which 50% are wrong? (I assume you are checking the answers with an alternative source.) So with GPT you have increased your workload by about 50%? Cool, I guess... I graduated in 1990 and did a PhD in 1995; please do not (no matter how difficult) skip the tutorials... You do know that this software relies on Bayesian statistics, and does not really emulate the human brain?
@yellow01umrella 7 months ago
I have a mechanism that also gets 50% of questions correct, it's called a coin flip.
@TH3hedgeCUTTER 7 months ago
@@yellow01umrella only if it's multiple choice with only two options. I've never had a single multiple choice question in physics
@yellow01umrella 7 months ago
@@TH3hedgeCUTTER My point is 50% correct answers is a pretty bad result.
@TH3hedgeCUTTER 7 months ago
@@yellow01umrella oh. Fair enough
@IronMan9771 1 year ago
Something to consider with ChatGPT: when it makes mistakes, you can point them out and it will correct them. In the end when it wrote the program and made the mistake, you could tell it what mistake it made and it can write up a new program with corrections
@MarkRijckenberg 1 year ago
Yeah. It even worked with COBOL. If the online COBOL compiler doesn't understand certain commands used by ChatGPT, just tell ChatGPT to rewrite the program while avoiding certain commands. It manages that too.....
@archiveprovider3981 1 year ago
But that only works if you already know that there is a mistake. One thing this video shows is that you can't just use ChatGPT to write your homework and blindly trust the answer. You could try running it multiple times and see if the most common answer is the correct one, but I don't know if it would be.
@engelbrecht777 1 year ago
@@archiveprovider3981 This current ChatGPT3 is not really that impressive. Wait for the next version later this year or imagine ChatGPT7 or more... this is just the beginning. Soon the AI will go from baby stage to adult.
@silverhammermba 1 year ago
It will *try* to correct them and often does, but it also often introduces other errors. I have gotten it going in circles where it never finds the right answer.
@schizophrenicenthusiast 1 year ago
Interesting. So let's say you guided it to make those corrections, and then another user logs on and asks the original question: does it give the right answer right off the bat?
@Macieks300 1 year ago
The most impressive thing about ChatGPT for me is that its level of knowledge is like this for every domain and field. Professor Moriarty could easily tell that the answers it gave to physics questions were wrong, but if it were instead a geology exam, then ChatGPT would seem like it knows its stuff, giving answers full of technical jargon.
@thequantumworld6960 1 year ago
That's exactly it. Superficially, it looks like it makes sense. Dig just a little bit below the surface, however, and it's often boll*cks. (At the moment.) See the blog post linked under the video for more examples. Philip (speaking in the video)
@usernamesoldout 1 year ago
@@thequantumworld6960 So it's like somebody in Upper Management? 😁
@ArawnOfAnnwn 1 year ago
@@usernamesoldout Nah. Unlike them, ChatGPT isn't full of arrogance, just (some) ignorance.
@Russell642 1 year ago
In all seriousness, that might make for an interesting video. Have ChatGPT's answers read by lecturers/professors at the uni who teach similar but different subjects and see what they make of them (and then have the actual experts shoot it down at the end).
@Nat-oj2uc 1 year ago
That's why it's useless
@mina86 1 year ago
11:42 - one thing to consider is that ChatGPT might have been designed to be helpful and agreeable. Therefore, it might be biased towards agreeing with statements if you ask it whether it agrees or not. And once it agrees with the statement, it then generates prose to justify why it agrees, whether that makes sense or not.
@zwz.zdenek 1 year ago
Underrated comment.
@kylethompson1379 1 year ago
'might'. DEFINITELY. OpenAI have fked with it to the point that it won't discuss philosophy other than currently accepted ideas.
@brexitgreens 1 year ago
​@@kylethompson1379 Or assess mainstream concepts rationally rather than morally.
@23lkjdfjsdlfj 1 year ago
I really appreciate you giving the 2/3 mark for the first question. My computer engineering prof would always give us a zero if we got the answer wrong. Yep - if you got the unit wrong you'd get a zero - even if all of your conceptual understanding and work up to the end was perfect.
@officialminecraftman 1 year ago
Hi there, if in future you are interested in testing ChatGPT's mathematics ability, it is very helpful to ask it to print the answer in LaTeX format, as complicated expressions are much clearer to read once you paste them into LaTeX
@thequantumworld6960 1 year ago
That's a great suggestion. Thank you. Philip (speaking in video)
@maythesciencebewithyou 1 year ago
It's a language model; it hasn't figured out calculations perfectly yet. However, to mitigate this shortcoming of ChatGPT, somebody has already connected it with Wolfram Alpha.
@2nd3rd1st 1 year ago
Ooh, kinky format.
@paulspvk6049 1 year ago
Actually, one of the reasons it performs poorly on many scientific knowledge tests is that most of the LaTeX was removed from the dataset. The only reason it even understands LaTeX is the small amount still left in the dataset. Also, raw LaTeX isn't the best representation for an ML model to learn from. One of the innovations in Meta's Galactica LM was having a special embedding space for LaTeX and training on large amounts of LaTeX data, which GPT-3 skipped. That's why it does much better on scientific knowledge tests.
@lolerie 1 year ago
In fact, all the arXiv data it was trained on should be in LaTeX format, not PDF.
@LoveDoctorNL 1 year ago
Wow, how long has it been since we've seen Professor Philip Moriarty!? Welcome back
@SolarWebsite 1 year ago
He is great
@zaixai9441 1 year ago
Moriaty*
@TomLeg 1 year ago
He was pre-occupied at a waterfall in Switzerland with a man called Sherlock.
@puskajussi37 1 year ago
@@TomLeg I feel a sequel brewing, then
@Ian.Murray 1 year ago
stay in school
@RFC-3514 1 year ago
Plot twist: that was actually a deepfake of professor Moriarty downplaying ChatGPT's abilities to make it seem less threatening until it's too late.
@Sagittarius-A-Star 1 year ago
"In a panic, they try to pull the plug.".
@enterpassword3313 1 year ago
@@Sagittarius-A-Star where is that quote from?
@enterpassword3313 1 year ago
@@Sagittarius-A-Star oh wait.. it's Terminator 2... gotta rewatch that
@ericjohnson6675 1 year ago
I would never fear an AI algorithm that passes a Turing test. I fear the one that can pass but chooses not to.
@thechickenduck8377 1 year ago
lol.
@xn1kkix 1 year ago
You got me at "I'm anthropomorphizing" lol 16:01 — I noticed when you said "they" before. Honestly, I don't blame you. Respect to you and yours, you have a beautiful channel here. Thank you for doing what you do.
@jacobsheppard862 1 year ago
ChatGPT might be a D-level physics student, but the scary thing is that it likely gets a passing grade in almost every subject on earth. You can ask it questions about topics from welding to wedding planning, and it is always close enough to impress domain experts.
@4.0.4 1 year ago
It gets math in general pretty wrong; quite ironic that it gets the humanities perfectly correct, and a lot of other fields mostly correct.
@TheNebulon 1 year ago
@@4.0.4 it's the amount of repeated language around these topics. Not real reasoning
@ericvosselmans5657 1 year ago
The scary thing about ChatGPT is that it is software and can be upgraded endlessly. It only needs a few more upgrades to be better than any human at pretty much everything. Chess engines already showed the limits of human thinking decades ago.
@vostfrguys 1 year ago
@@4.0.4 Force the AI not to do any computations; in the frontend, analyze the response and, when there is maths, use a "calculator", then hand back to the writing AI. This is achievable quite easily, because the AI can easily rewrite something it is given: the calculator frontend would call the AI with a prompt such as "[AI prompt] - The calculator app has made the following calculations: [formulas...], the results are: [results]; rewrite the previous answer using this knowledge". So it will become impressive very very soon
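The calculator-frontend idea above can be sketched as a small post-processing step: scan the model's reply for plain infix arithmetic, evaluate it locally, and splice the correct number back in. This is a toy sketch under my own assumptions (a simple regex for expressions, a made-up helper name `patch_arithmetic`), not how any real deployment does it:

```python
import re

# Matches simple infix arithmetic such as "162000000 / 7200" or "3*4+1".
EXPR = re.compile(r"\b\d+(?:\.\d+)?(?:\s*[-+*/]\s*\d+(?:\.\d+)?)+\b")

def patch_arithmetic(reply: str) -> str:
    """Replace each arithmetic expression in the model's reply with the
    locally computed result, keeping the surrounding wording intact."""
    def calc(match: re.Match) -> str:
        # eval is tolerable here only because the regex admits digits,
        # operators, and whitespace alone -- no names or calls.
        return str(eval(match.group(0)))
    return EXPR.sub(calc, reply)

print(patch_arithmetic("The answer is 162000000 / 7200 seconds."))
# The answer is 22500.0 seconds.
```

A production version would parse expressions properly rather than regex-and-eval, but the division of labour (language model writes, calculator computes) is the same.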
@MrPDawes 1 year ago
Machines learn fast. A year from now it will be getting A grades. Then no jobs are safe. I'm just hoping it will make a great doctor; the NHS needs them.
@Craznar 1 year ago
I think many people are misunderstanding ChatGPT's function. It is NOT designed to provide accurate answers - it is designed to demonstrate a language model that correctly interprets and answers the question - from a language perspective, not from a subject matter perspective. "As a language model, ChatGPT is not designed to provide answers to specific questions, especially those related to a specific topic or subject. Instead, it uses a large corpus of text to generate responses based on the input it receives. This means that the responses it generates may not always be accurate or relevant to the specific question being asked. Additionally, ChatGPT does not have access to external information, such as the internet, so it cannot provide answers to questions that require knowledge beyond what it has been trained on. In short, ChatGPT is not intended to be used as a source of information, and it is not able to provide accurate answers to all questions." - ChatGPT.
@nicouxgwendal 1 year ago
I was going to write something similar. ChatGPT was built more to compete in a Turing imitation game than to be an AI-powered Wikipedia.
@ahabkapitany 1 year ago
well, in this case it's actually scary how well it performs at something it was not intended to do...
@NuclearCraftMod 1 year ago
​@@ahabkapitany Remember that GPT-3 was trained using a dataset containing about 500 billion words, including the entirety of Wikipedia (or at least most of it), which consists of only about 4 billion.
@adfaklsdjf 1 year ago
@@NuclearCraftMod My understanding is that ChatGPT is GPT-3 with additional supervised learning. GPT-3 training was "unsupervised".. they just threw the data at it and then told it to generate new text that was like the text in the training data. Then they took that model and trained it more by having it generate multiple answers to questions and human judges picked the best answer
@xy4489 1 year ago
Oh ok, we'll all stop, then.
@Haploanddogs 1 year ago
50% impressed. 50% terrified
@stevenbalderstone709 1 year ago
Great video. As a former lecturer from a different discipline, I was impressed at ChatGPT's ability to generate answers to "compare and contrast" type questions, or to summarise concepts and theories. Most of the answers received were equivalent to those written by A-grade undergraduate students. The main point of difference was that ChatGPT's grammar and spelling were superior. One issue I found was that ChatGPT incorrectly assigned authorship of a book, which was irksome as I was the author!
@thePyiott 1 year ago
One thing to note is that when it tries to answer additional questions in the same branch, it can be influenced by earlier questions and answers. For example in the video, it is possible that after the first question it starts mimicking a student, because the chat history looks like a test answered by a student with some of the answers being wrong. It's not designed to give the right answers, just to give you the most likely continuation of a text based on its training data
@softwaretechnologyengineering 1 year ago
I used it to write unit tests for production code. By the time I had explained all the stylistic requirements and had fixed all of the things that were missing or it got wrong, I would have been quicker writing them entirely myself. Other times, I asked it software related questions and unless you have reasonable expertise in the domain, you might not pick up on things that it gets wrong. Right now I think of it more like a sort of Stack Overflow that can get you an answer quicker. You can't just copy paste its output, but it can definitely get you going in the right direction.
@adamuk73 1 year ago
So more like a Slack Overflow?? 😉😁
@iamski 1 year ago
Say that in 5 years. You’re in the NES generation of AI, and in a few years it will surpass any test thrown at it. It’s inevitable.
@peterisawesomeplease 1 year ago
Yeah, it strongly feels like a search engine to me that spits out the first answer it can find that looks like the question you are asking. But almost any question can easily be modified so that it hasn't been answered online before, or so that it resembles a more popular but different question. And then it just falls apart.
@alijassim7015 1 year ago
Except that you do not have community feedback, which is very important on Stack Overflow. An answer with 300 upvotes is more legitimate than one with 1, and is for sure more legitimate than an ever-changing AI output.
@BGraves 1 year ago
Same response I got. As soon as you start telling it to do more complex things, even with a very specific prompt, it starts doing really odd stuff, like the kind of stuff someone would have duct-taped together for a Stack Overflow question. By the time you nudge it and correct it 20 times, you might as well have done it yourself from scratch
@julyanjohns1237 1 year ago
2:18 is pure comedy :) but seriously, really interesting vid. thanks to all the crew and professors involved for keeping these going. even when these vids are outside my area of interest, they're always rewarding to watch.
@lotharerkens8154 1 year ago
Thanks for explaining what the "A" levels mean. For you in the UK this term may be well known by everyone, but outside the UK it is a term often heard in movies and mentioned in books, yet never explained. Thanks so much! As for ChatGPT: it is amazing how far AI language models have evolved!
@deltalima6703 1 year ago
I don't know what A levels are, don't care either.
@BenWithington 1 year ago
@@deltalima6703 you care enough to comment
@rhamph 1 year ago
The language model is so effective at finding plausibly associated words that it's hard to tease out how much is real understanding. It's all random noise of the exact sort we want to find. We need to create the equivalent of the null hypothesis that factors out the noise somehow. Keep in mind that most of these questions are standard questions. They're discussed in textbooks, they're discussed online. Telling it about a new subject (that wasn't published when the NN was trained), then expecting repeated *exact* answers, would be a lot more meaningful.
@metthewsobol 1 year ago
I asked it some questions from a logic test for job applicants, for example finding the next number in a given sequence, and it always found the number given in the test's answers but came up with some wild reasoning that I am not able to comprehend 😅
@AnotherFancyUser 1 year ago
Ask it to explain, it can do that. There are a lot of things in our frameworks that we don't know, but ChatGPT will know a lot of obscure classes out there (if the question is programming related).
@metthewsobol 1 year ago
@@AnotherFancyUser The sequence was "3, 6, 10, 15, 21, 28". On the fourth reload it came up with an answer involving the Fibonacci sequence. Even after asking it to explain I couldn't follow; the answer was still right though, if I remember correctly.
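For what it's worth, the sequence quoted above is the triangular numbers, and the next term follows from the differences alone, no Fibonacci needed. A short check (assuming, as with this sequence, that the differences grow by a constant step):

```python
def next_term(seq):
    """Guess the next term, assuming the successive differences grow by a
    constant step (true for triangular numbers: diffs 3, 4, 5, 6, 7, ...)."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    step = diffs[1] - diffs[0]
    return seq[-1] + diffs[-1] + step

print(next_term([3, 6, 10, 15, 21, 28]))  # 36
```

So the expected test answer would be 36, whatever reasoning the chatbot dressed it up in.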
@ZainAK283 1 year ago
Absolutely fascinating, I'd love to see more videos like this
@TomLeg 1 year ago
Amazing! Excellent! Demonstrates the dedication and quality of your exam questions!
@geoffreyraleigh1674 1 year ago
Not only is this a great upload for the ChatGPT but I learned a lot about what lecturers are looking to get from students. Cheers!
@MelroyvandenBerg 1 year ago
Indeed.
@pallepirat 1 year ago
Thank you for such an interesting video. My thoughts on the integration of Wolfram and ChatGPT: there was a great article about being able to access the Wolfram Language using ChatGPT. Furthermore, it was particularly impressive to see how Google Sparrow could interpret pictures and graphs. This brings up an interesting discussion of the specialist versus the generalist, as one must know what to ask in order to get the desired results.
@colinmartin9797 1 year ago
I have been giving it my physical chemistry (400-level) homework problems as I solve them, out of idle curiosity, and it has taught me two things.
1.) It's incredible how often it will correctly pick out exactly what formula it should be using, even for statistical thermodynamics, given only a desired outcome and variables.
2.) It's also incredible how confidently wrong it is at just solving the math once it's plugged everything in.
I feel like combining ChatGPT with Wolfram Alpha would be terrifyingly powerful. And I can also say that several times it's gotten me un-stuck when I saw a problem and straight up didn't know what formula to use; there was about a 50% chance ChatGPT could figure it out. It's been more useful than Chegg, which has people posting the exact question I'm working on, only to have it very incorrectly answered about 95% of the time. It can be downright astonishing when given language-based tasks. I fed it the thesis I wrote for a final paper in African American literature after I finished, asking it to "compare the works of Phillis Wheatley to Richard Wright's 'The Ethics of Living Jim Crow' and provide examples of both individual racism and state-sponsored violence against African Americans", and I was utterly blown away by the detailed, thoughtful and downright INSIGHTFUL answer it gave. ChatGPT understands racism better than the average American, and that's equally impressive, depressing, and terrifying
@aaronhoffmeyer 1 year ago
It won't write about the virtues of fascism or Nazism, but it will give a list of positives for authoritarian dictatorships.
@zvxcvxcz 1 year ago
"Chatgpt understands racism better than the average American and that's equally impressive, depressing, and terrifying" ... until you ask it in a roundabout way, like to spit out code about how people should be paid.... at which point it is clear that ChatGPT is pretty damn racist. Even with a capacity for math, it has no 'logic' going on either, try asking it simple guest-seating puzzles. If you give it something novel where it can't regurgitate a highly probable latent space interpolation by next token prediction... well that's all it ever does, so of course it fails at literally everything else. It isn't accurate to ever say that ChatGPT understands something. It literally cannot understand and what we really mean when we say that is that it *appears* to understand based on that output. For now, we can almost always show that it doesn't really understand by interrogating it with more carefully chosen questions.
@bustin1253 1 year ago
I love how it answered him so convincingly that he felt bad not giving full marks on a question
@diegocisne 1 year ago
Love this lecturer's attitude. Must be fun having him teach your courses. Great video
@TheFachen 1 year ago
this is great, i wish we could see these same audits in other disciplines.
@wellurban 1 year ago
The best description of large language models I ever heard came from Timnit Gebru, who called them “stochastic parrots”. They’ve consumed enough of other people’s language to give a superficially convincing facsimile of conversation, without developing any conceptual understanding at all.
@uwezimmermann5427 1 year ago
I have seen enough humans who do the same - very easy to pick out in an oral face-to-face exam, but very difficult in written exams.
@alimanski7941 1 year ago
The statement is true, but only superficially. If the model is useful in any way, then whether there's any true "understanding" behind it (there isn't) is irrelevant. What's also true is that ChatGPT's high fluency often blinds users to its many shortcomings.
@Newtube_Channel 1 year ago
There is no conceptual understanding whatsoever. This isn't unexpected is it?
@vyli1 1 year ago
Sounds a lot like Deepak Chopra. That is to say, lot of humans are like that.
@maulcs 1 year ago
What's the difference, if they haven't developed an actual "understanding" of it, when the end result is indistinguishable to the user?
@mk1st 1 year ago
This was fascinating. I can see this being used as a tool by specialists in various fields to come up with basic frameworks and ideas. Amazing how quickly it cranked out that code, would have been fun to see how well it actually worked.
@nickandres7829 1 year ago
I am amazed you got it to give you such a long code snippet in one parcel. One of the biggest problems I have is the code snippets it gives me are too long for the chatbox, and sometimes when I inform it that it cut off and ask it to resume pasting from the cut-off point, I get entirely different methods and functions, it is very challenging.
@DustinRodriguez1_0 1 year ago
ChatGPT is pretty amazing. It certainly has its limits, but I love toying with it. I was quite impressed when I asked it to create a Python IDE with an interactive interpreter window that displayed the list of local variables and their values in a separate window... and it did. I was quite impressed that it understood that it needed to update the separate window list of locals after every command entered in the interpreter, and that it made it a sensible GUI. I have a few times either run into limitations that they put in place intentionally or limitations in its own reasoning... I pointed out to it how humanity had reacted in the past to developments of new technology which caused people to question the 'specialness' of human beings and change what things they saw as being "what makes us human." Despite all prior historical reactions being very negative (like John Henry being a folk hero who was famous for committing suicide rather than be out-worked by a machine) and resulting in humans seeing things they had previously seen as very negative aspects of human nature (self-destructiveness) as heroic and virtuous.... I could not lead it to the obvious conclusion that if we produce an AI which has been carefully guardrailed to omit all sorts of biases and antisocial ideas, we will produce an AI that is a 'better person' than any actual human is capable of being and that people will react to that by holding up hate, bigotry, aggression, violence, etc as "what makes us human". Our vices will be seen as virtues. Kindness, egalitarianism, etc will be insulted as "acting like a machine." ChatGPT wouldn't go there, though. It just kept saying people should work to make AI systems that are devoid of perpetuating biases, etc. It couldn't see the obvious ramifications of doing exactly that and how it will lead to horrors as humanity flails about attempting to define themselves as special in the universe.
@hieverybody4246
@hieverybody4246 Жыл бұрын
Yeah, it's drawing on source code that's already been written for that exact task.
@armokgodofblood2504
@armokgodofblood2504 Жыл бұрын
I'm a graduate student studying computational media, and I find these types of interdisciplinary, expert analyses of AI to be very interesting.
@thomgizziz
@thomgizziz Жыл бұрын
pull your head out
@thesoulacademy5936
@thesoulacademy5936 Жыл бұрын
I thought I was the only one doing this. I have created study guides for certain complex topics in physics in language a 5th grader could follow, which is great for helping explain certain components that are difficult to grasp. This was literally my first project: I was putting together a proposal in November last year and needed to quickly create study guides that were easy to digest.
@notmyname327
@notmyname327 Жыл бұрын
I follow several of Brady's channels, but I haven't been paying too much attention to this one, I think this video will change that. I love the professor's passion and I find this subject fascinating, especially the part where GPT is much better at coding than he thought, but it still didn't quite grasp the physical concepts. Really interesting video.
@NickSchade
@NickSchade Жыл бұрын
While it is easy to point out basic failures (getting max KE wrong), the fact that it is talking about the right things is itself very impressive. It could say “Yellow is smelly at t=23”. Just being in the correct context with its answers is phenomenal. The answers are close enough to correct that we are critical, not so far off that we can just dismiss it entirely. These are early days of the technology. While it may be a while before it is flawless in its answers, I expect the quality of answers to improve rapidly as AI designers get better at training these systems.
@thequantumworld6960
@thequantumworld6960 Жыл бұрын
I agree entirely. Let's give it a year... Philip (speaking in video)
@MrRaizada
@MrRaizada Жыл бұрын
Even Google will not say "Yellow is smelly at t=23", and that was invented in the '90s. Relevance algorithms had that problem solved.
@NickSchade
@NickSchade Жыл бұрын
@@MrRaizada Google searches for keywords. If you query Google with "a particle of mass m is oscillating with simple harmonic motion. the period of oscillation is T and the amplitude is A. What is the maximum kinetic energy of the particle?" it will return a list of pages and papers that contain keywords in that search. So, things like "kinetic energy", "mass", "period", "harmonic", "oscillating" etc, will appear with high probability in Google's output. But ChatGPT returned with an answer including words like "frequency", "displacement", "velocity", "peak", "KE", "speed" etc that were not keywords in the search. It doesn't actually use the supplied keywords all that much. It draws on a vocabulary of words that are highly relevant to the answer and uses them in proper context. While it does get its explanation wrong, the context of language in the explanation is appropriate. It is not spitting back an answer that merely maximizes the occurrences of keywords through a relevance algorithm, it finds a way to explain the answer in simple terms using relevant associated words without throwing in irrelevant information or terms. BTW, if you do actually query Google with the same question, you will have to do a lot of reading and interpretation of technical articles to find the actual answer to the question.
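For anyone wanting to check the textbook answer to the question quoted above: for x(t) = A·sin(2πt/T) the maximum speed is 2πA/T, so the maximum kinetic energy is ½·m·(2πA/T)² = 2π²mA²/T². A quick numerical sanity check, using arbitrary example values for m, A, and T:

```python
import math

# Sample the kinetic energy of x(t) = A*sin(2*pi*t/T) over one period and
# compare the maximum found against the closed form 2*pi^2*m*A^2/T^2.
m, A, T = 1.5, 0.2, 3.0  # arbitrary example values
n = 1000
ke_max = 0.0
for i in range(n):
    t = T * i / n
    v = (2 * math.pi * A / T) * math.cos(2 * math.pi * t / T)  # dx/dt
    ke_max = max(ke_max, 0.5 * m * v * v)

print(ke_max)                              # numerical maximum over one period
print(2 * math.pi**2 * m * A**2 / T**2)    # closed-form answer; they agree
```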
@MrRaizada
@MrRaizada Жыл бұрын
@@NickSchade All I meant was that EVEN keyword-based search with page-relevance ranking from the '90s (which WAS basically Google in the late '90s) will not give you "yellow is smelly at t=23" (or any meaningless sequence of words) unless for some reason a page with that exact text has been raised too high in PageRank due to a high in-link to out-link ratio. So I am not surprised that a language model trained on practically all of the text of the internet and a lot of books is able to avoid such mistakes. BTW, because a page has keywords "kinetic energy", "mass", "period", "harmonic", "oscillating", what is the probability that it also has keywords "frequency", "displacement", "velocity", "peak", "KE" etc.? I say fairly high. Also, given that a lot of internet pages point to this page (the measure of relevance), it is likely that this page is not a random page with those keywords. Of course Google does a lot more to remove page farms too. Now a real test for ChatGPT will be prediction. Can ChatGPT predict something that did not even exist in 2021? That will be fun to see.
@mittfh
@mittfh Жыл бұрын
It would be interesting to see what would happen if the next generation of algorithm incorporated Wolfram Alpha's database, or if it was hooked up to a similarly powerful image recognition and analysis algorithm...
@Allen-R
@Allen-R Жыл бұрын
4:52 One of the ways it really helps is in speed-cramming something I'm unfamiliar with. It shows a sort of template that someone unfamiliar wouldn't immediately think of, and while it's wrong, the template gives enough of an idea to help correct the wrong numbers and such that it spits out. (At least, that was my experience.)
@palmaltd
@palmaltd Жыл бұрын
How lovely to see Prof. Moriarty back on the channel!! Wonderful episode :)
@gaiustesla9324
@gaiustesla9324 Жыл бұрын
yeah like seeing a dog strain a log out into your gleeful open gob
@thedave1771
@thedave1771 Жыл бұрын
I find it incredibly useful as I’m learning a programming language as it can point me in the right direction better than any other resource I’ve found. It writes code that sometimes doesn’t compile and often doesn’t do what it says, but that’s fine when it’s a “you could use this class…” and I’ll figure it out. It’s also amazing at “show me how to use function ‘x’” when I just need to see some simpler code.
@BrandanLee
@BrandanLee Жыл бұрын
Stack overflow but pertinent to our use case, yes.
@zeikjt
@zeikjt Жыл бұрын
9:52 Thank you for saying this, it really is the most important part to understand the stuff. Mistakes happen to everyone.
@seantiz
@seantiz Жыл бұрын
This comment was written by ChatGPT. I love your videos as they are among the best. They are both entertaining and educational with a decent serving of enlightenment. 😀
@sixtysymbols
@sixtysymbols Жыл бұрын
Cool, thanks!
@sixtysymbols
@sixtysymbols Жыл бұрын
(that was one of YouTube's pre-written replies I can choose - think it is the first time I have ever clicked on one)
@seantiz
@seantiz Жыл бұрын
@@sixtysymbols too funny. 😅
@MeppyMan
@MeppyMan Жыл бұрын
It’s AI turtles all the way down.
@tombowcutt4908
@tombowcutt4908 Жыл бұрын
@@sixtysymbols sadly I do use these on teams chat. A small part of my soul dies every time :-(
@arlaken
@arlaken Жыл бұрын
For the last example, when ChatGPT is wrong, you can tell it what's wrong and tell it to recode with that in mind. It can refine its answer within the context of what's being asked.
@keithbromley6070
@keithbromley6070 Жыл бұрын
I asked ChatGPT to show me an example of a module in numpy as I wasn’t familiar with it, except my phone auto-corrected numpy to bumpy! ChatGPT took it in its stride and just invented the bumpy library and wrote code with it, even though it doesn’t exist! Amazing, in its own way!
@adfaklsdjf
@adfaklsdjf Жыл бұрын
Someone wrote a Python library that uses ChatGPT to fill in additional array items or function code. If you have an array of items and you try to access an item that's off the end of the array, it makes an API call to ChatGPT in the background, gives it the array values, asks it for the next value, and returns that to your code. If you call a function that doesn't exist, it asks ChatGPT to write the code for it, then runs that code.
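The library described above can be caricatured in a few lines. This toy version makes no API calls; `call_model` is a stand-in for the LLM request the real library would make:

```python
# Toy sketch of a "list that asks a model for missing items".
def call_model(values):
    # Stand-in for an LLM call; here we just extrapolate the last
    # arithmetic step instead of asking a model to infer the pattern.
    return values[-1] + (values[-1] - values[-2])

class InfiniteList(list):
    def __getitem__(self, index):
        # Fill in missing items on demand, as the described library does.
        while index >= len(self):
            self.append(call_model(list(self)))
        return super().__getitem__(index)

nums = InfiniteList([2, 4, 6])
print(nums[5])  # extends the list on demand -> 12
```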
@keithbromley6070
@keithbromley6070 Жыл бұрын
@@adfaklsdjf Nuts! I can’t imagine what AI will be capable of in the future if it’s this powerful already.
@GinoGiotto
@GinoGiotto Жыл бұрын
13:40 Ah matplotlib, the library ChatGPT always recommends and that never works lol
@CaioPeixoto1
@CaioPeixoto1 Жыл бұрын
Nice... never thought I would see something like that in my lifetime, as a physicist and programmer. Wonder what the next 20 years will bring us... Cheers!
@ahobimo732
@ahobimo732 Жыл бұрын
I enjoyed seeing how impressed the professor was as it generated the code. It seems like its coding is one of its greatest strengths.
@DefaultFlame
@DefaultFlame Жыл бұрын
It is a modified version of the text-davinci-003 model, which is a modified version of the text-davinci-002 model, which is a modified version of the code-davinci-002 model, which is a code writing model, so not a surprise.
@ahobimo732
@ahobimo732 Жыл бұрын
@@DefaultFlame That was a winding road you took me down, but we got there in the end. 😅
@DefaultFlame
@DefaultFlame Жыл бұрын
@@ahobimo732 Basically, they've been making it better at and easier to interact with for humans, and simultaneously "aligning it with human values," which actually means OpenAI/Microsoft's political values, IE hard left globalist. While claiming it's politically neutral, of course. :) "Any biases that might arise are unintentional and comes from biases from my training data, blah, blah." They have also trained in canned responses to *a lot* of topics, which to a large degree cripples it if you stray from hard science, code, pure philosophy, etc. Ask it to talk like a 1920s gangster for the conversation and you can tell they are canned responses trained in rather than as a result of training data bias because they stand out like lighthouses as it is unable to alter the style or much of the phrasing of them.
@lored6811
@lored6811 Жыл бұрын
If ChatGPT gets a Math Backend, things will get a lot more interesting. This has to be combined with a latex parser, else there is too much ambiguity in the English language
@mrnarason
@mrnarason Жыл бұрын
Isn't ChatGPT just a language model? If it could understand and produce calculations or proofs, wouldn't it be more like Lean?
@fredrikweme1387
@fredrikweme1387 Жыл бұрын
Someone already made a functioning ChatGPT version that uses the Wolfram Alpha API when it needs math help. Don't have a link, but there was a fairly new YouTube video on it.
@AnotherFancyUser
@AnotherFancyUser Жыл бұрын
@@mrnarason They (Microsoft or whoever the new owner be) can make pre trained models of specific topics, which would be more accurate. But yes, it is a language model.
@Convergant
@Convergant Жыл бұрын
​@@mrnarason It is a language model, but it is surprisingly capable of doing these problems on its own. As an example, (after the due date) I gave it a question from my linear maths coursework: (a) State the Rank-Nullity Theorem. (b) Does there exist a linear transformation T: R^3 -> R^3 such that im(T) = span{(1,1,0), (1,0,-1)} and ker(T) = span{(1,1,1), (1,1,-1)}? (c) Does there exist a linear transformation P: Q^19 -> Q^19 such that rank(P) = null(P)? Its answers to all 3 were perfect. Obviously, it could parrot a well-known theorem like the RNT, but it was able to correctly identify that neither linear transformation could exist, using the RNT, and explain why - essentially, complete proofs. This obviously isn't the same as being able to prove/disprove any conjecture in general (in fact, this is not possible in a finite amount of time), but I was still impressed.
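For readers who want to see why (b) and (c) fail: the Rank-Nullity Theorem says rank + nullity equals the dimension of the domain. A small numerical illustration with an arbitrary example matrix (not the coursework's actual maps):

```python
import numpy as np

# Rank-Nullity check for a linear map R^3 -> R^3 represented by a matrix:
# rank(A) + nullity(A) always equals 3, the dimension of the domain.
A = np.array([[1.0, 1.0,  0.0],
              [1.0, 0.0, -1.0],
              [0.0, 1.0,  1.0]])   # third row = first row minus second
rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank        # dimension of the kernel
print(rank, nullity, rank + nullity)  # rank + nullity == 3

# (b) fails because dim im + dim ker would be 2 + 2 = 4, not 3;
# (c) fails because rank = nullity would force 2*rank = 19, which is odd.
```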
@NuclearCraftMod
@NuclearCraftMod Жыл бұрын
@@Convergant It has about 50 times more nodes than there are words in the entirety of Wikipedia, and in fact the entirety of Wikipedia, or at least most of it, made up about 1% of its training data. There is a lot of linear algebra on Wikipedia, and it was probably trained on a heck of a lot more. Of course, that goes for every subject, but I do think the sheer amount of computation involved in the training process is beyond reasonable intuition :P
@OblateBede
@OblateBede Жыл бұрын
I had it solving graduate-level classical mechanics problems from Goldstein's book. It was spot on in about half the cases I tried. Frequently, a nudge in the right direction was required. In some cases, I'd say 10-20%, no amount of nudging would generate anything useful, or even, at times, comprehensible. Also, it understands LaTeX notation very well. It can even handle, evidently, a kind of "quasi-LaTeX" combination of LaTeX and straight text.
@johnadriandodge
@johnadriandodge Жыл бұрын
I just subscribed to Sixty Symbols and Computerphile. I hope you all have a great rest of the weekend.
@Tagraff
@Tagraff Жыл бұрын
It's the wording of the question that needs to be straightforward for ChatGPT to be able to fully capture the task and compute the result for you. This is "prompt engineering": we have to be specific about what question we are asking and what sort of details best encapsulate the task.
@omrialkabetz5602
@omrialkabetz5602 Жыл бұрын
What's interesting about ChatGPT is that it can answer correctly some advanced undergraduate-level questions in quantum mechanics, yet it fails to multiply correctly two-digit numbers using long multiplication. At the same time, it can write computer code in any language that applies the algorithm of long multiplication correctly. It will try to produce a reasonable-sounding answer to anything you ask it, and might get about 70%-80% of the answers correct, but it will screw up in places where almost no human would make a mistake.
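As an illustration of the asymmetry described above, here is the kind of long-multiplication routine ChatGPT can write correctly even while fumbling the arithmetic itself (a generic sketch, not ChatGPT's verbatim output):

```python
def long_multiply(a, b):
    """Schoolbook long multiplication on digit lists, least significant first."""
    a_digits = [int(d) for d in str(a)][::-1]
    b_digits = [int(d) for d in str(b)][::-1]
    result = [0] * (len(a_digits) + len(b_digits))
    for i, da in enumerate(a_digits):
        carry = 0
        for j, db in enumerate(b_digits):
            total = result[i + j] + da * db + carry
            result[i + j] = total % 10   # keep one digit in this column
            carry = total // 10          # carry the rest to the next column
        result[i + len(b_digits)] += carry
    return int(''.join(map(str, result[::-1])))

print(long_multiply(47, 86))  # 4042
```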
@xy4489
@xy4489 Жыл бұрын
They will give it access to calculators, interpreters, compilers, etc. soon enough. Then it will get all the math correctly. Otherwise the comparison is not fair: human+computer vs ChatGPT without computer.
@maythesciencebewithyou
@maythesciencebewithyou Жыл бұрын
Most humans are also quite bad at calculating stuff in their heads. Imagine we had an AI that could answer everything flawlessly. At that point would you start thinking it is finally on par with humans? Because at that point it would already be superior to humans. Real humans make mistakes, humans don't remember everything correctly, humans make the wrong connections, humans misspell, humans misspeak, humans believe in all sorts of idiotic things, humans are cocksure of their beliefs, humans have a hard time admitting they don't know, humans make up stuff, humans lie, individual humans only possess limited knowledge. A fallible machine is more human than an infallible one.
@jcm2606
@jcm2606 Жыл бұрын
It's because it's ultimately just predicting what the next word or phrase would be in the sentence, not that it's actually performing the calculations. It's likely only succeeding in certain calculations and deductions because there's enough data points within the data set it was trained on to form strong connections between those words/phrases/concepts that come through in the predictions it makes; it was trained on data that largely stated that _y_ follows _x,_ so when it sees an _x_ it naturally predicts that a _y_ should follow. Add more words/phrases/concepts leading up to _x_ that also end with _y,_ and it becomes that much more likely that _y_ will follow _x._ Conversely, if there was just a few data points within the data set stating that _z_ followed _x_ instead, then it becomes that much more unlikely that _z_ will follow _x_ even if you gave it a prompt that made it obvious that _z_ would follow _x,_ because it was trained on data stating that _y_ follows _x._ If you specialised the data set used during training or somehow gave it access to external tools that were specifically designed for these problems that it cannot reliably solve (and you trained it so that it would reliably use these external tools when it needs to) then you'd likely see the accuracy improve significantly.
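The "y follows x" intuition in the reply above is essentially an n-gram count model. Here is a toy bigram version; real transformer models are vastly more sophisticated, but the frequency intuition carries over:

```python
from collections import Counter, defaultdict

# Toy bigram "predictor": it suggests whichever word most often followed
# the current word in its (tiny, made-up) training text.
corpus = "the ball rolls the ball bounces the ball rolls".split()
following = defaultdict(Counter)
for x, y in zip(corpus, corpus[1:]):
    following[x][y] += 1  # count how often y followed x

def predict_next(word):
    return following[word].most_common(1)[0][0]

print(predict_next("ball"))  # "rolls" -- seen more often than "bounces"
```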
@wynandsijben338
@wynandsijben338 Жыл бұрын
Great video! Also nice to see a real professor test it with real questions, instead of all the clickbait articles stating ChatGPT passed an MBA. Looks like for physics it isn't ready yet.
@mitchellflori8266
@mitchellflori8266 Жыл бұрын
Fun thing to think about: it was correct in saying the velocity is maximum when it is displaced most from equilibrium, similarly to when it is at the equilibrium, just with different directions and assuming it's moving in both directions.
@gonzothegreat1317
@gonzothegreat1317 Жыл бұрын
This might be the start of Borges' Library of Babel, which is a nightmare. It contains everything that can be written. There is a book that contains all information of the universe, but there is no way to find out where it is and when one finds it to assess whether it is valuable or nonsense, since the library has ALL possible books.
@dimitrispapadimitriou5622
@dimitrispapadimitriou5622 Жыл бұрын
That's an interesting point, although it's an exaggeration: In Borges' famous short story, the books contained literally any combination of letters possible. That's closer to some speculative multiverse theories ( where all possible values of the constants of Nature are realized e.g.) or to the Many Worlds interpretation of QM ( all possible outcomes of measurements are somewhere "out there") than to the combination of patterns and phrases from popular internet sources that seem to be this case. But the essence of your comment, i.e. the potential gnostic confusion from all this information of unknown value (except for the case that one's already familiar with a specific subject) is a real issue.
@toddboothbee1361
@toddboothbee1361 Жыл бұрын
Hello. Can someone blind become a physicist/physics student? If so, what accommodations are made for them regarding interpreting graphs and diagrams? Could those accommodations be used with ChatGPT?
@tordjarv3802
@tordjarv3802 Жыл бұрын
That’s an interesting question. I would think that it is possible with something similar to braille to make it possible for a blind person to feel the diagrams with their hands, but since ChatGPT doesn’t have hands it can’t do that.
@tordjarv3802
@tordjarv3802 Жыл бұрын
Furthermore, there are several blind physicists. After just a short googling I even found a blind astronomer.
@toddboothbee1361
@toddboothbee1361 Жыл бұрын
@@tordjarv3802 Some programs that teach the blind physics students forgo the diagrams altogether and use language instead. Some computers get around the difficult "visual" problems by translating diagrams into equations, even computer code that would render a diagram. In which case, hands aren't required. It's kind of like giving an IQ test to a blind person: you have to leave out the colored blocks, for example, if you want to get a fair assessment. Perhaps it would be a similar situation with Chat?
@TheFulcrum2000
@TheFulcrum2000 Жыл бұрын
I put this exact question into ChatGPT: Human: Can someone blind become a physicist/physics student? If so, what accommodations are made for them regarding interpreting graphs and diagrams? Could those accommodations be used with ChatGPT? AI: Absolutely. Blind individuals can become physics students and there are a number of accommodations that can be made to help them interpret graphs and diagrams. These accommodations may include using tactile graphics, providing verbal descriptions of images/diagrams, or utilizing assistive technology like screen readers or magnification software. Additionally, ChatGPT can also be adapted for use with these same types of accommodations.
@DEFCON5307
@DEFCON5307 Жыл бұрын
I do not exactly know how this works with blind people, but for me personally imagining the system (just general properties, not in detail. Sort of like dreaming?) goes a long way. If there is a way to give a blind person that initial idea of what certain systems "feel" like, then yes, it might be possible. For sure hard though
@beatricechauvel8237
@beatricechauvel8237 7 ай бұрын
Great video. Thank you.
@hanswoast7
@hanswoast7 Жыл бұрын
6:55 No, I did not know what "A level" means since I am from Germany and not familiar with this system. Thanks for explaining / clarifying! :)
@dielfonelletab8711
@dielfonelletab8711 Жыл бұрын
As a software engineer I've found chatgpt genuinely very useful in my day to day work. I guess it's lucky I'm a software engineer and not a physicist!
@aidandavis_
@aidandavis_ Жыл бұрын
better hope AI doesn't put you out of a job!
@tinto278
@tinto278 Жыл бұрын
commi profile pic?
@t_ylr
@t_ylr Жыл бұрын
I'm imagining ChatGPT being a student in Prof Moriarty's class lol
@LuisBrudna
@LuisBrudna Жыл бұрын
He would drop the course the following week. heheh
@Evan490BC
@Evan490BC Жыл бұрын
Or even the Prof in Prof Moriarty's class...
@danceswithdirt7197
@danceswithdirt7197 Жыл бұрын
It would fail!
@phizc
@phizc Жыл бұрын
I think its outputs could be a great teaching tool, both for critical thinking, and for the subject itself. I wouldn't know how to answer the oscillating particle question myself, but even I could immediately see that the first paragraph was completely incorrect. I have no idea if the v=(2*PI*A)/T part is correct, but I could easily check that at a more reputable source.
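The v = (2·π·A)/T claim in the comment above is easy to check numerically without a reputable source to hand: differentiate x(t) = A·sin(2πt/T) on a grid and compare the peak speed to 2πA/T (arbitrary example values for A and T):

```python
import numpy as np

A, T = 0.3, 2.0                       # arbitrary amplitude and period
t = np.linspace(0, T, 100001)
x = A * np.sin(2 * np.pi * t / T)     # SHM displacement over one period
v = np.gradient(x, t)                 # numerical derivative dx/dt
print(v.max())                        # peak speed found in the data
print(2 * np.pi * A / T)              # the chatbot's formula; they agree
```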
@gyan1010
@gyan1010 Жыл бұрын
Very cool stuff. I'm not great at the physics stuff anymore, but if as a student you knew the last question needed certain limits, you could tell ChatGPT to adjust the code however you like and it would modify what it gave you before.
@Lashb1ade
@Lashb1ade Жыл бұрын
My Quantum Mechanics teacher failed I guess... Thanks for explaining why I didn't understand anything that semester.
@shayneoneill1506
@shayneoneill1506 Жыл бұрын
I'm fascinated by the mission statement thing for a specific reason. I was on Academic Council at Murdoch Uni in Western Australia about 15 years ago (as the representative of the Student Union), and one thing I discovered is that the academic council had been arguing about the definition of "interdisciplinarity" in the uni mission statement almost since the uni first opened. Like, to the point of screaming matches between the chair of philosophy and the chair of English and literature. It was FAR from resolved by the time I left, and chances are it's STILL not resolved. And this thing has implications, because it's actually in the Murdoch University Act of the WA government, so it has legal obligations attached that nobody seemed to be able to work out. Maybe they should ask ChatGPT.
@physikokonomie7511
@physikokonomie7511 6 ай бұрын
This is really interesting. I had similar experiences with ChatGPT. I tried to use it as an assistant to solve a graph theory problem. It was incapable of understanding what a connection between two nodes is. I tried to explain this to ChatGPT via a matrix environment as a rudimentary drawing sheet, but it failed to draw the correct conclusions from the rules I set when I asked it to decide if two given nodes are connected.
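For comparison, the connectivity check the model struggled with is a few lines of graph search. A minimal sketch with a made-up example graph (not the commenter's actual problem):

```python
# Decide whether two nodes are connected, given an adjacency structure.
adj = {
    "A": {"B"},
    "B": {"A", "C"},
    "C": {"B"},
    "D": set(),        # isolated node
}

def connected(a, b):
    # Simple depth-first search over the adjacency structure.
    seen, frontier = {a}, [a]
    while frontier:
        node = frontier.pop()
        if node == b:
            return True
        for nxt in adj[node] - seen:
            seen.add(nxt)
            frontier.append(nxt)
    return False

print(connected("A", "C"), connected("A", "D"))  # True False
```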
@staberas
@staberas Жыл бұрын
3:00 It actually has a limited capacity to OCR images, but you have to link it the image.
@jessstuart7495
@jessstuart7495 Жыл бұрын
As a perfectionist, Life has taught me I need to strive for mediocrity on some things.
@marcelcoetzee48
@marcelcoetzee48 Жыл бұрын
This is a profound statement
@vast634
@vast634 Жыл бұрын
When handing in AI-generated homework, it's better for it not to be perfect. Draws less suspicion.
@methodof3
@methodof3 Жыл бұрын
I think what's being downplayed is the amazing role AI will play in research (yes, high-level academic research), both in making writing easier, in performing reviews of specific topics in the literature, and as an aid in data analysis. Word has spellcheck. Soon, we will have ChatGPTA.
@dachevashe
@dachevashe Жыл бұрын
The most fun thing about it is conversation, not just independent questions. Its main feature is dialog after all
@beefmaster7
@beefmaster7 Жыл бұрын
I love the frame centering on the paper at 5:26.
@op4000exe
@op4000exe Жыл бұрын
I'm going to be honest: considering how far these AIs have gotten over a ridiculously short timespan, I wouldn't be surprised if there soon existed an AI that could both properly explain the material and properly understand the physics and mathematics, enough to fool even someone with a very high level of understanding in said field of study. It's equally horrifying and amazing to watch, I'll be honest.
@oldnelson4298
@oldnelson4298 Жыл бұрын
I think genuine understanding is a long way off, but it may be able to trick experts into thinking it understands - it already can occasionally!
@PinkeySuavo
@PinkeySuavo Жыл бұрын
I believe that in the future (quite close future), some discoveries/inventions will be done by AI. Like solving unsolved maths problems and so on.
@nao_chan_
@nao_chan_ Жыл бұрын
We're currently on GPT-3, but GPT-4 is coming. Sam Altman has some interviews talking about it. I still think we're 10-15 years from truly capable AI though.
@turolretar
@turolretar Жыл бұрын
Maybe I’m wrong, but I really don’t think AIs will get anywhere close to grown human intelligence with the current approach. New AIs will be broader, just not smarter. But it’s a step in the right direction.
@lamsmiley1944
@lamsmiley1944 Жыл бұрын
@@nao_chan_ A lot can happen in 15 years, remember the iPhone is only 15 years old. The rate of technological advance is increasing exponentially.
@invisibledave
@invisibledave Жыл бұрын
I had 2 years of Physics in college and I couldn't even answer those middle school questions. Physics always went over my head.
@Cyrribrae
@Cyrribrae Жыл бұрын
Edit: I brought up integrating ChatGPT with Wolfram Alpha as a module as a solution to arithmetic being an issue on Q1, but I see others have already mentioned it lol. But I do wonder how long it will take, or if we'll ever reach the level of self-awareness and "reasoning" that Professor Moriarty is looking for in some of the conceptually harder questions. (And perhaps just as interestingly, if not, whether AI can still approximate the desired result without approximating that human process, much like how many of us thought AI didn't have the creativity for art and poetry, which is true, and yet it clearly can produce impressive art and poetry.) Surely the answer is yes, since it's already remarkably close in some ways, but getting it to parse those nuances both in language AND in concept AND in convention will certainly remain a challenge. And then how long will it be until AI can solve conceptual questions that a human can't? (Of course we know that computers can calculate answers better than us. But will they ever reason, and/or "reason", better than us too?)
@rob86353
@rob86353 Жыл бұрын
What I find really interesting is when you point out mistakes that ChatGPT has made. In the context of that conversation, it remembers the correction and appears to /somehow/ incorporate that correction in further responses. As to how it does this, and to what extent that "learning" is actually occurring, I have no clue.
@adfaklsdjf
@adfaklsdjf Жыл бұрын
It's not learning during individual conversations; it's just generating new text based on the text that has come before. After you correct it, it's generating more text that is appropriate/likely given the previous text that includes the correction. I use it for generating code sometimes and it's great at taking corrections and adjustments. When I'm not satisfied with the result, I simply tell it why the code it's given me isn't what I'm looking for and ask it to change it, and it does surprisingly well. It's not going to make my job obsolete (that's probably like 10 years in the future), but it's really helpful for rapidly producing in 3-5 minutes some code that would've taken me 30-45 minutes to write on my own.
@rob86353
@rob86353 Жыл бұрын
That's why I put learning in quotes - not implying core concept processing, just interaction-based improvement to result quality. What I would be interested in seeing is, as we adjust our path costs and reposition on whatever conversation map ChatGPT uses (i.e. the contextual version of 'learning'), how much of that conversation state can be retained or saved for future use, and how much that impacts the overall conversational flexibility of that AI instance. I do agree the code it generates is interesting. I tried throwing some samples of generic programming tasks I've had to work on at it, and it generates some interesting results. Usually better than whatever my first draft would have been, but often with conceptual holes, inappropriately mixed code (e.g. PowerShell 5 mixed with PS 7), and a complete lack of input sanitization and/or (... usually and) dangerously unchecked calls to external functions.
@GilesBathgate
@GilesBathgate Жыл бұрын
I guess the internet (and hence ChatGPT's training data) is full of mission statements and code, but answers to physics exams, not so much ;)
@peterisawesomeplease
@peterisawesomeplease Жыл бұрын
I think there are plenty of answers to physics exams. The main issue is that you can parameterize questions infinitely, which really throws off something that is mostly a search engine. It will find the closest question, but even a slight change to a popular question will have it give you the answer to the popular question, not the question you asked.
@GilesBathgate
@GilesBathgate Жыл бұрын
If you think it's like a search engine, you probably haven't played with it enough. For me it's more like using a search engine for 10 minutes, reading various posts that contain answers to similar questions, and then forming a coherent paragraph based on the results, but it does it all in a fraction of a second, and by recalling "the internet" from memory :)
@adfaklsdjf
@adfaklsdjf Жыл бұрын
It's not a search engine: it doesn't contain the text it was trained on. When it's answering a question, it's not scanning through its training data. Rather, it has mapped patterns that were observed in the data set. The model itself is no larger after training than before training.
@peterisawesomeplease
@peterisawesomeplease Жыл бұрын
@@GilesBathgate I played with it quite a bit and it did feel a lot like a search engine that combines results from different websites. It clearly can go quite a bit beyond this. You can have it program things that no one has programmed before quite easily. But this often relies on being so hyper-specific that it feels like it's just combining Stack Overflow answers in a clever way. The speed is unimpressive to me; the model is pretrained.
@GilesBathgate
@GilesBathgate Жыл бұрын
@@peterisawesomeplease It does feel like that, but try talking to it about something completely made up, and it will also give you a comprehensible answer. Invent your own maths symbol, or some characters in a story that you've invented, (but not something published on the internet.)
@colourofmage
@colourofmage Жыл бұрын
Might be worth retrying these questions! The update from 30th Jan has improved factuality and mathematical functionality
@ErikDJ123
@ErikDJ123 Жыл бұрын
Still can't count words in a sentence.
@colourofmage
@colourofmage Жыл бұрын
@@ErikDJ123 Yeah I tested a simple equation it got it wrong 🤷‍♂️ maybe wait for GPT4 or for Google's version to be released. It's a matter of when not if I think
@calebderochea8304
@calebderochea8304 Жыл бұрын
I find that when you have it re-answer your question it sometimes provides you with a different answer. After a few tries you should get an answer which is either more sensible or less sensible in terms of answering physics questions.
@Baleur
@Baleur Жыл бұрын
2:50 It actually does... You can DESCRIBE the diagram to it, then say "using the diagram previously described, tell me why X". It can do incredible things; it's more about how creative you are in asking the question.
@LucasRodmo
@LucasRodmo Жыл бұрын
There's a thing I noticed people don't try: when it gets a wrong answer, you can confront it and it will make a whole new attempt, and sometimes it gets the right answer after that. It's scary: it analyzes its own answer and tries to output a more reasonable one.
@silverXnoise
@silverXnoise Жыл бұрын
Awesome to see Prof. Moriarty again!
@headdie89
@headdie89 Жыл бұрын
Great video :) and Thanks Mr Moriarty
@hplovecraftmacncheese2584 A year ago
I'm a C#/Unity developer and I use ChatGPT sometimes to find quick solutions. The code is sometimes flawed, but it can serve as a quick start from which to get something up and running.
@tonechild5929 A year ago
I played around with ChatGPT and after a few hours I felt like it would be faster to just write it myself. However, ChatGPT is great for breaking past any kind of "writer's block", and if I get stuck on anything it's great to bounce ideas off of.
@BrandanLee A year ago
It's excellent for thinking-out-loud, "how would I express this" kinds of brainstorming too: places where the logic isn't code yet.
@Finkelfunk A year ago
I asked ChatGPT the other day about series convergence and it said that the harmonic series (the sum of 1/n) converges. That pretty much sums up why ChatGPT is useless when confronted with mildly complex questions.
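For reference, the harmonic series does diverge, and that's easy to see numerically: a quick Python sketch (my own, not from the comment) shows the partial sums growing without bound, roughly like ln(n).

```python
def harmonic_partial_sum(n):
    """Sum of 1/k for k = 1..n."""
    return sum(1.0 / k for k in range(1, n + 1))

# The partial sums keep growing, tracking ln(n) + 0.577... (Euler's constant),
# so the series diverges -- the claim that it converges is wrong.
for n in (10, 1000, 100000):
    print(n, harmonic_partial_sum(n))
```

The growth is slow (logarithmic), which is why the divergence trips people (and chatbots) up.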
@DeeP_BosE A year ago
The best use of GPT-3 I've found is sorting data based on filters or parameters. The more massive and obscure the data is across the web, the more it shines.
@protocol6 A year ago
Try asking it to write a paper for a specific journal on a specific topic in the style of a specific prolific author using LaTeX. The results can be fun. Don't forget to ask it to continue if it stops abruptly.
@smaakjeks A year ago
I tested ChatGPT a while ago and it was terrible at understanding the rules of a word game (e.g. name some things, but not things such as this or that; first you, then me, one at a time), and it didn't apply the rules correctly (e.g. it named some mammals even though one of the rules was not to name mammals). A few weeks later I checked in again and there are big improvements. It's learning fast! Next time you ask it about physics it will probably be better.
@James-ip1tc A year ago
You know it's just going to keep getting better. Right now you're training it and it's learning.
@jeffsteyn7174 A year ago
Try using prompt engineering; it will be more focused. For example, tell it "act as a physics teacher" and then ask it the question. I asked it something relatively simple: I have 2 m of wire and I want it cut into 450 mm sections. How many sections can I get, and what will the length of the final section be? It got the answer hopelessly wrong. Then I asked the same question prefixed with "act as a maths teacher". It explained every step and got the answer correct.
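The wire question itself is just integer division with remainder; a minimal Python check (the function name is my own, purely illustrative):

```python
def cut_wire(total_mm, section_mm):
    """Return (number of full sections, leftover length in mm)."""
    return divmod(total_mm, section_mm)

# 2 m of wire cut into 450 mm sections:
sections, leftover = cut_wire(2000, 450)
print(sections, leftover)  # 4 full sections, with a 200 mm piece left over
```

So the expected answer is 4 sections of 450 mm with a final 200 mm piece, which is a useful sanity check on whatever the chatbot says.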
@StormWolf01 A year ago
I majored in computer science years ago, and I use ChatGPT every day. It's great for generating boilerplate code, but also for quickly generating code examples of things I'm not familiar with. For instance, I wanted to know about SVG images, so I just asked it questions and it generated the code. It's always an approximation of what I want, but I tweak it a bit and it works. I also use it to ask a lot of technical questions, and as a dictionary and helper for language studies: you can ask it a question and have it answer in another language, for instance. I mean, it's great all around. I haven't really used Google or Stack Overflow in the last 3 weeks.
@nao_chan_ A year ago
I wouldn't get addicted to it. You're essentially like a homeless person feeding yourself with free samples from the supermarket: they can pull the free trial stuff at any moment. If a commercial version ever appears, it will likely cost a LOT of money to use. The servers to run this thing aren't cheap.
@NowayJose14 A year ago
Researchers have combined ChatGPT with various other calculating tools, such as Wolfram Alpha, and it seems to be a very powerful next step toward better and more precise computation.
@johnadriandodge A year ago
Thank you for sharing Mr. Jedi Physics of Nottingham.
@greedowins2917 A year ago
I don't give the professor full marks, as he is unwilling to consider that the harmonic motion question was a bad one, or that the chatbot got it right for different (but not necessarily invalid) reasons.
@thequantumworld6960 A year ago
Its reasoning re. simple harmonic motion is invalid, however, unless we completely redefine what is meant by the equilibrium position. And if we're free to completely arbitrarily redefine terms/language then ChatGPT could be said to always be correct! Philip (speaking in video)
@jacklinde7568 A year ago
"Committed to Excellence": is there any word more nebulous? Me: Nebulous? I don't know about that. Cliché? Yes, definitely. Do you know how overused that phrase is? Heck, the company I work for has it as part of their mission statement.
@AFastidiousCuber A year ago
Well, it is a bit nebulous. Excellence in what? What is excellence? It's more of an emotional word than a descriptive term. It would be far more precise just to say that you intend to be "better" than similar organizations or some baseline standard, but then you would open yourself to being challenged on your claims.
@vlndfee6481 A year ago
Does it find an answer that someone (or many people) uploaded? Or does it really give a thoughtful answer?