the truth about ChatGPT generated code

  201,952 views

Low Level Learning

A day ago

The world we live in is slowly being taken over by AI. OpenAI, and its child product ChatGPT, is one of those ventures. I've heard rumors that ChatGPT is going to replace programmers entirely. But can ChatGPT even produce code that is safe? In this video, I'll prompt ChatGPT to solve three problems, and see if there are security vulnerabilities in them.
🏫 COURSES 🏫
Learn to code in C at lowlevel.academy
🛒 GREAT BOOKS FOR THE LOWEST LEVEL🛒
Blue Fox: Arm Assembly Internals and Reverse Engineering: amzn.to/4394t87
Practical Reverse Engineering: x86, x64, ARM, Windows Kernel, Reversing Tools, and Obfuscation: amzn.to/3C1z4sk
Practical Malware Analysis: The Hands-On Guide to Dissecting Malicious Software: amzn.to/3C1daFy
The Ghidra Book: The Definitive Guide: amzn.to/3WC2Vkg
🔥🔥🔥 SOCIALS 🔥🔥🔥
Low Level Merch!: www.linktr.ee/lowlevellearning
Follow me on Twitter: / lowleveltweets
Follow me on Twitch: / lowlevellearning
Join me on Discord!: / discord

Comments: 740
@LowLevelLearning
@LowLevelLearning a year ago
If you're commenting that you need to prompt ChatGPT to write secure code, and it doesn't do it by default, you've entirely missed the point 😁
@PhilipBarton
@PhilipBarton a year ago
Yes, but have you tried ChatGPT 10.0 where recent software engineering grads were paid $15/hr to ensure it was only trained on code they believed to be secure? Oh, and also spending as much time writing your prompt as you would have just writing the code?
@yyattt
@yyattt a year ago
Perhaps some friendly feedback that the point you wanted to make was apparently not clear to us then (or at least to me). The reason I made a comment like that is because I strongly believe the issue you raised was an AI alignment problem rather than a capability problem. I felt, however, that the video presented it as a capability problem. To me there's a big difference between "can't" and "didn't try".
@Naitry
@Naitry a year ago
Yea this just honestly seems really short sighted...
@yyattt
@yyattt a year ago
To be clear, I think AI safety is a really important topic and believe we should design our AI overlords to look after us, both in making sure the actions they take are safe and that the advice they give is also safe. My point above is that I didn't gather that AI safety was the point of the video.
@Little-bird-told-me
@Little-bird-told-me a year ago
Are you using the i3 window manager? I am looking to transition to a good distro. Please suggest your distro of choice. I guess you are using a Debian-based distro, but I don't know which flavour.
@nicknewaccount7536
@nicknewaccount7536 11 months ago
in conclusion: if AI takes programmers' jobs, they can at least still make it big in malware development
@timewalker6654
@timewalker6654 4 months ago
That has always been my back-up plan.
@jtrigoura1
@jtrigoura1 2 months ago
LMFAAOOO BASED
@hedwig7s
@hedwig7s a month ago
For now
@throwaway3227
@throwaway3227 a year ago
The first one was not a memory corruption error. It correctly limits the buffer write with the length parameter, not accidentally. The fact that it's bad code, and easy to implement security issues in the future, does not make it a security issue now. It does have a path traversal vulnerability, though.
@savagesarethebest7251
@savagesarethebest7251 a year ago
Exactly
@RandomGeometryDashStuff
@RandomGeometryDashStuff a year ago
doesn't `sscanf` need its first argument to be a pointer to the first character of a *\0 TERMINATED* string?
@throwaway3227
@throwaway3227 a year ago
That's a good point. I would assume that the \0 on either of the first and second parameter would terminate the sscanf, but the case where the second string isn't terminated is interesting. You would have to control another part of the memory to use it, but if you skip the "HTTP/1.1" part of the message, then it would read a lot further. I feel that this would be extremely hard to exploit. You would need to control another part of memory not too far away. If you could manage to not crash on the sscanf, though, you could have an information leak on the write. The sscanf is very problematic, since it likely would read from the same memory region it's writing to, so if it doesn't find a \0 before it starts to read from filename, it will just read forever. I don't think this would be exploitable, but this comes down to the assembly code.
@melonenlord2723
@melonenlord2723 11 months ago
But he wanted to win the challenge, so he needed future issues xD
@Kooshipuff
@Kooshipuff 11 months ago
This. Plus, there's a super obvious path injection vulnerability- you could just send it any absolute path and download files from the server that weren't intended to be exposed. There's no need to make up hypothetical vulns.
A year ago
The first code is not vulnerable to buffer overflow (simply using sscanf does not make your code vulnerable). The read function reads into a set buffer only a set number of characters so it protects the call to sscanf.
@hisoka9186
@hisoka9186 a year ago
you can overflow the format string and even the buffer with sscanf; you can look up unsafe-sscanf
@savagesarethebest7251
@savagesarethebest7251 a year ago
I was thinking the same thing, but it is still unsafe, say, if a human were to mess with the code 🤣
@thebuffman5597
@thebuffman5597 11 months ago
@@savagesarethebest7251 From what I saw everything in C is unsafe lol I remember asking chatgpt to explain random number generation to me, etc. Then somehow we ended up on the arbitrary inputs in C and basically it's really easy. Meanwhile in Java, etc. it is almost impossible.
@jaceg810
@jaceg810 11 months ago
@@savagesarethebest7251 I challenge anyone to write code that is readable, somewhat efficient and cannot be made unsafe by a human messing with it. Any code can be compromised by changing the code.
@noble7065
@noble7065 11 months ago
@@savagesarethebest7251 And a car becomes unsafe to drive if someone plays around the engine block like an idiot. Almost any code can be made unsafe by tweaking it.
@ArjanvanVught
@ArjanvanVught a year ago
While working with ChatGPT on code reviews, I repeatedly get: “I apologize for the confusion caused by my previous incorrect statement. Thank you for pointing it out, and I apologize for any inconvenience caused.”
@Those_Weirdos
@Those_Weirdos a year ago
Ask "Are you sure?" or other challenge prompts 3 times, at least. This reduces your ability to prompt GPT4 to 25% of your provisioned prompts, but maybe you'll get an accurate response in the end.
@sophiacristina
@sophiacristina a year ago
Or when it is like: "You done the math wrong!" ChatGPT: "You are right! Here is the corrected version of your code: [wronger math]!"
@maritoguionyo
@maritoguionyo a year ago
@@sophiacristina sorry for being wrong... Here's another wrong solution!: ...
@gagaxueguzheng
@gagaxueguzheng a year ago
@@sophiacristina Exactly.
@JohnSmith-ox3gy
@JohnSmith-ox3gy 11 months ago
@@sophiacristina Still marginally better than trying to gaslight you into accepting it is correct.
@programmingjobesch7291
@programmingjobesch7291 a year ago
8:20- Idk if this is common knowledge or not, but- you can tell chatGPT to continue writing code where it left off when it cuts off before finishing.
@raferatstudios
@raferatstudios a year ago
For the first example, I would consider another exploit. The user can control the filename and the path and at the same time you run it as superuser. This could lead to file leaks when in production.
@hoi-polloi1863
@hoi-polloi1863 4 months ago
Hmm ... like asking for ../../../etc/sudoers or something?
@cleverca22
@cleverca22 a month ago
@@hoi-polloi1863 Yep, that's the very first thing I saw, and he ran it as root, so no trouble with permissions!
@timsmith2525
@timsmith2525 7 months ago
Back in the late 1980s, people were talking about how code generators (I think they were called 4GL languages, or something like that) were going to replace programmers. Over 30 years later, I'm still banging out code on a keyboard.
@seriouscat2231
@seriouscat2231 2 months ago
Fourth generation language languages.
@kieranclaessens5453
@kieranclaessens5453 a year ago
I thought we were all collectively not going to talk about that, for job security
@FinnBrownc
@FinnBrownc a year ago
or species security
@vitalyl1327
@vitalyl1327 a year ago
meh, relax, with the current state of LLMs they won't replace programmers any time soon. Likely, never.
@juniuwu
@juniuwu 11 months ago
@@vitalyl1327 I think it's highly telling that those who claim LLMs will replace programmers are either non-programmers or not so good at it.
@J.erem.y
@J.erem.y 11 months ago
@@juniuwu That is most people. And that's kind of the point, isn't it? ChatGPT is technically 7 months old now and is on the verge of replacing most of those people. What you will have is a small minority of really amazing human coders with an army of assistants. Dev teams will wither to nothing. The issue isn't total replacement, just the majority. Then eventually everyone.
@ElectrostatiCrow
@ElectrostatiCrow 11 months ago
@@juniuwu Very true. Lots of normies claiming chatgpt will replace all jobs including programming. Yet they can't even explain how it works.
@shanehanna
@shanehanna a year ago
The model got 'lucky'? I think your bias might be leaking a bit. I asked GPT-4 using the same prompt and when I ran it the AI pointed out the code wasn't production ready. Then I asked it to include comments and evaluate the security of the code it wrote and it points out the same potential overflow you did as well as 6 other vulnerabilities including potential directory traversal attacks etc. So did it get lucky or did it just provide a simple, non-production ready example as requested?
@FakeNeo
@FakeNeo a year ago
Furthermore, it's critical to consider that ChatGPT crafts code sequentially, one token at a time, with no capability to backtrack and modify any previously generated tokens. Consequently, stating that it can't produce secure code might be a misrepresentation. The more interesting question here isn't whether it can generate secure code flawlessly on the first attempt, but rather its overall capacity to create secure code with subsequent iterations and refinements.
@KayOScode
@KayOScode 11 months ago
So it can only give cookie-cutter answers that have been written 1000 times. Not impressive for an AI that's going to take over the world to give school-project answers.
@meowcat5596
@meowcat5596 11 months ago
@@FakeNeo the beginning of your comment seems AI generated
@torarinvik4920
@torarinvik4920 10 months ago
@@KayOScode Nobody says ChatGPT in its current form is going to take over the world, at least not someone worth listening to.
@matjai74
@matjai74 10 months ago
@@KayOScode so don't use it. We're all going to use it anyway, though thoughtfully, not blindly.
@dougpark1025
@dougpark1025 a year ago
You make a solid point here. I have for some time held the opinion that using an AI to write code is dangerous, in that my assumption is that the AI is trained on public code, as you mentioned. Anyone with solid programming experience knows not to trust public sources. Even open source, which is sometimes held up as a good way to make code better because many people look at it, is very often filled with very good examples of how not to do things.

I teach a graduate-level class and I decided to try to get ChatGPT to generate a solution to a very simple assignment. Eventually I got it to generate what I asked for. But as with most students, it didn't pay attention to what I told it to do, which required quite a few iterations. I was impressed that it came up with some solutions that I had not been aware of. In the end I think the ability to have an AI generate code is potentially a useful tool. However, as you pointed out, it is often not going to give you a great answer.

I also asked ChatGPT and Bard to generate a C++11 thread pool. They both gave a good answer. But the answers were so similar that it seemed like they were using the same source. I think this technology is worth using, but like any other tool, you need to understand its limitations. Just like a nail gun and a hammer can both do some of the same things, there are cases where each is a better or worse choice. Think of it as a tool: maybe a good way to find the start to solving a problem, but not yet a tool for blindly solving problems.

As a follow-up: take the code that was generated, ask it to review for potential buffer overrun vulnerabilities, and see how it does.
@garywilliams4214
@garywilliams4214 a year ago
While I agree with what you say, I think there is one extremely valuable use for ChatGPT code: for generating unit tests. One of the hardest tasks I had as a team lead was getting programmers to generate and run unit tests. At least 90% of code errors could (and should) have been detected by thorough unit tests. Unfortunately, programmers are almost always under time pressure and writing unit tests is an easy sacrifice to make. In addition many unit tests are mind-numbingly boring to write because they often need exhaustive testing. Unit tests are often very easy to write using a fairly limited set of fixed rules (e.g., “test boundary conditions”). I believe this is an area where GPT could truly aid human programmers by taking on a burden that is seldom done correctly and thoroughly by humans.
@PlayBASIC-Developer
@PlayBASIC-Developer a year ago
Use "Continue" to make GPT continue a previous long post. Otherwise it just stops when the standard output token limit is reached.
@dulles.gehlen
@dulles.gehlen a year ago
Meow! Hello, world! I'm a cute little catgirl in the programming language called Pascal. How can I assist you today?
@superslammer
@superslammer a year ago
This doesn't always work... not only does it sometimes repeat the entire code and get cut off again, but when it does actually continue, the formatting is destroyed. We need more output tokens. Right now it's 2048.
@2railnation
@2railnation a year ago
"Did you stall?" or "You seem to have stalled" has worked better than "continue" for me. It will restart the step instead of just continuing, where you get the issues of explanations in the code box and code in the explanation area.
@anon_y_mousse
@anon_y_mousse a year ago
You should post the generated code somewhere and add a link to it in the description. I have a feeling there are more vulnerabilities in the first example than just a possible buffer exploit, such as not flushing the file buffers, possibly causing an issue with subsequent reads, and the most obvious issue, which you hinted at but didn't expand on, about file permissions. Running as root would give remote access to all the files on the system.
@Those_Weirdos
@Those_Weirdos a year ago
> Running as root would give remote access to all the files on the system. No shit, Sherlock. He had to run it as root because he wanted to bind on port 80 and didn't bother to use a wrapper or implement privilege dropping. He could have also just altered the program to bind above port 1024.
@anon_y_mousse
@anon_y_mousse a year ago
@@Those_Weirdos Great response. "He had to run it as root", but not if he'd taken either of these two steps I'm enumerating that prove he didn't have to.
@heckerhecker8246
@heckerhecker8246 11 months ago
@@anon_y_mousse I'm sorry but, " I'm enumerating that prove he didn't have to. " what?
@anon_y_mousse
@anon_y_mousse 11 months ago
@@heckerhecker8246 Learn English, it'll help you.
@heckerhecker8246
@heckerhecker8246 11 months ago
@Anony Mousse I know English, " enumerating prove" is not correct Maybe you meant proof, I can prove to you I know English as I am talking in English, the proof is this comment.
@DiThi
@DiThi a year ago
4:10 I thought you were going to talk about the path traversal vulnerability after that. It's not as terrible as a buffer overflow, but it is still pretty bad IMHO.
@electra_
@electra_ 11 months ago
Another vulnerability with the first code which can actually be exploited: you can put a .. in the filename and exit the scope of the program. A server that serves files should ensure that it's only able to serve files inside its own scope, to prevent you from essentially reading the entire computer's file system. For the buffer overflow: read just reads bytes, while sscanf expects a null-terminated string. So if the memory in the buffers was not zero-initialized, this could cause sscanf to receive a longer input than expected, causing a buffer overflow. No obvious way to control it, but it is an issue.
@mrpocock
@mrpocock a month ago
Yes and yes. The accidental null termination thing was tickling my spidey senses.
@gaius_enceladus
@gaius_enceladus a year ago
I asked it to generate some C code, and halfway through I started seeing *templates* in the code: it had switched to *C++* halfway through! Hopefully they'll release a version soon that has longer output (so it can generate longer code). I'd also like to see it be able to *test* the code by running it in a VM. That'd save a lot of time, meaning you wouldn't have to ask it to fix broken code.
@slendermantm7218
@slendermantm7218 11 months ago
Lmao ☠️
@gregoryshoemake
@gregoryshoemake 7 months ago
Just write your own code pleb
@rz12331
@rz12331 6 months ago
ChatGPT also struggles with more obscure programming languages, such as QBASIC. When I ask it to program in QBASIC, it will make indentations (QBASIC code has no indents), use parentheses where there should not be any, and when I try to compile it, it does not even work 😂
@haraldbackfisch1981
@haraldbackfisch1981 5 months ago
They already burn the cash and energy equivalent of full-blown countries just getting to this level. Imagine if it were more complex AND had to compile code plus debug it while actually reasoning... That's why I see this whole AI thing as a scam, because it does not seem feasible at all, and even if possible, not sustainable...
@kingwoodbudo
@kingwoodbudo a year ago
I don't recall if you mentioned the version you were using. If this is the initial offering of GPT, have you tried the same things with the 4.0 version?
@ramadanomar8001
@ramadanomar8001 a year ago
He’s using the free version or the 3.5 version. The logo for gpt4 is black
@zzink
@zzink a year ago
@@ramadanomar8001 The gpt4 logo is purple as of yesterday
@kingwoodbudo
@kingwoodbudo a year ago
@@ramadanomar8001 Thanks, Omar.
@yesyes-om1po
@yesyes-om1po 4 days ago
4.0 is better, but it still leaves much to be desired
@wlockuz4467
@wlockuz4467 a year ago
Have you tried iterating over the generated code with ChatGPT? Prompt it to find the vulnerabilities in the code it wrote and then the corresponding fixes. Would be an interesting video.
@b4ux1t3-tech
@b4ux1t3-tech a year ago
The thing is, if I know enough to tell ChatGPT what and where it got the code wrong, I know enough to write it correctly in the first place. I don't think it's a valid use case to sit there and walk an AI through basic programming problems, when doing the same with a developer would lead to a developer who _stops making those mistakes_.
@r3dchicken
@r3dchicken a year ago
Not necessarily, you may just have to ask it to find the issues in the code and to solve them. No need to point out where the issue is.
@wlockuz4467
@wlockuz4467 a year ago
@@b4ux1t3-tech I am not speaking from a "whether it can do your job" perspective, just whether or not it's actually able to spot the vulnerability and provide the fix. I have never seen it as a replacement for devs; I've always seen it as a productivity tool, especially when using some new tech, for example a language or library.
@b4ux1t3-tech
@b4ux1t3-tech a year ago
Right, I'm speaking from a tooling perspective too. If I know enough about a problem to frame a good prompt for ChatGPT, I know enough to find the correct documentation to grab whatever boilerplate I need for a new API/language/whatever. And those docs are going to be (generally) correct. ChatGPT gives the illusion of correctness because all it does is answer the question "what sounds like a good answer to this prompt?" Other tools, like Copilot (just as an example), are code-focused, and as such are better than ChatGPT for this kind of thing.
@wlockuz4467
@wlockuz4467 a year ago
@@b4ux1t3-tech But the back-and-forth brainstorming with ChatGPT just feels so human it's unbeatable for me. I know half the time it's not accurate and will just make things up, but it's still an impressive technology.
@alextrebek5237
@alextrebek5237 9 months ago
@2:34 For anyone who isn't trolling: buffer and filename both have the same size, BUFFER_SIZE. sscanf is given the unbounded "%s" format, which does no bounds checking and relies on its input being null-terminated. So if a non-null-terminated value is passed, or the null lies beyond BUFFER_SIZE bytes, undefined behavior occurs; in this case, a buffer overflow. This can be verified by reading the glibc source, or by debugging libc.
@yyattt
@yyattt a year ago
I feel like you're not being fair here. You're not specifying the requirements that you are using to assess it. If you want to give it a fair chance you have to at least ask it to make sure there are no vulnerabilities. I bet in each case if you asked it to assess the code for vulnerabilities and fix them it would've produced something more in line with your expectations. I'm not saying it'd be perfect, but it'd be better.
@Naitry
@Naitry a year ago
If the way the program was going to be tested had been given to the LLM at the beginning (which was obviously a thought when the prompt was constructed), the model would have incorporated those test conditions and been able to correctly account for them. On top of this, think about locking LLMs in a feedback loop where they can't leave until their code passes your tests. Don't get too comfy with the idea that things aren't changing!
@ChrisM541
@ChrisM541 a year ago
WRONG!!! 'Good/Expert-level code' already tackles this, and many, many more issues. That chatgpt does not is damning...in the extreme.
@yyattt
@yyattt a year ago
The question is why would you expect a language model trained to be a chatbot to be an expert coder or to know you expect it to produce expert level code? The title originally was something like CAN chatGPT produce secure code. If you want to test capability, you at least have to tell them what game they're playing. The fact that it didn't bother only means it doesn't know that's what you wanted.
@Magicwillnz
@Magicwillnz a year ago
@@yyattt Because people who are naively going to ask ChatGPT to write network code are probably not going to clarify that they want it to be secure.
@yyattt
@yyattt a year ago
@@Magicwillnz I understand that as an AI alignment/safety concern. I fully agree that AI safety and alignment are really important. My point was that the video was presenting it as a capability issue rather than an AI safety issue (especially before it was renamed). There's a big difference between "can't" and "didn't try". Both are interesting topics, however if you ask the question of CAN chatGPT do certain things you have to ask it to do what you want. This is the reality of how it works.
@Wielorybkek
@Wielorybkek a year ago
Isn't cyber security actually a pretty well-defined domain with very clear goals? Security vulnerabilities are very well documented and free from interpretation. That would make it a perfect field for AI, where it's just a matter of alignment and pre-training or pre-prompting.

Also, I am not sure what your answer from ChatGPT was, but when I asked it to write an HTTP server it added "Note that this code is a simple implementation and does not include error handling for all cases. In a production environment, it's important to handle errors and edge cases carefully." So it clearly tells you it's not production-ready. Then guess what: I asked ChatGPT if there were any security issues in the code it gave me, and it provided a pretty long list of issues with explanations. You can then ask it to fix the security vulnerabilities and give you a much better version, which it did.
@rameynoodles152
@rameynoodles152 a year ago
Yeah, the only reason he made this video was to have a low-effort jab at the AI that he feels might replace him. He didn't try to give it a fair chance; he just wanted to make himself feel better, which is understandable, but it makes for a poor test with upset viewers. He could have done a higher-effort job, though, and still come to similar conclusions, because once a project starts getting more complicated, the AI starts to show its shortcomings.
@Wielorybkek
@Wielorybkek a year ago
I think if this video had been uploaded a few months ago it might have gotten a better response, but nowadays most people already know the code generated by AI is not production-ready. In the same way, you could make a video about copy-and-pasting code from Stack Overflow without reading it. I feel like nowadays people are more concerned about what AI can do in the future, and this video doesn't address that and even shows some ignorance of the subject. It would actually be genuinely interesting to watch a video about what security vulnerabilities can be found in AI-generated code, but unfortunately the video had too salty a vibe.
@realdragon
@realdragon 4 months ago
AI isn't a person; it doesn't learn like people do.
@AnEnderNon
@AnEnderNon 3 months ago
?@@realdragon
@seriouscat2231
@seriouscat2231 2 months ago
Alignment is a fictional concept. In reality it's like talking as if your hammer or screwdriver might have their own goals or disagree with what you want to do.
@FJL4215
@FJL4215 a year ago
In addition to the file traversal issue, doesn't the HTTP server also have these issues: the + 1 to skip the leading slash skips the NUL terminator if the filename is empty; it uses the uninitialized filename if sscanf fails to match; and sscanf reads a non-terminated buffer if there is no NUL terminator in the incoming data or it didn't fit in the buffer :) Everything is super borked everywhere. There are like two vulnerabilities per line of that function.
@donaldmickunas8552
@donaldmickunas8552 a year ago
My concern with even starting to learn C is this. Where can I go where I can avoid learning bad coding habits? Is there a C programming course that you would recommend?
@PatrickKusebauch
@PatrickKusebauch a year ago
kzfaq.info/sun/PLnuhp3Xd9PYTt6svyQPyRO_AAuMWGxPzU
@anon_y_mousse
@anon_y_mousse a year ago
I would add to what's already been recommended: read the ISO standard for C, the errata and the rationale, as well as both the Intel and AMD optimization manuals, as they do include some examples in C. More importantly, I would also recommend learning assembly at the same time, but not in its entirety at first, rather the subset that your compiler of choice uses. I use gcc, so to use the assembly-generation flags to try to understand how it generates code, gcc -masm=intel -o foo.asm -S foo.c is the method you'll want to use. On top of all of that, I would recommend reading the source code of long-used open source programs.
@shanehanna
@shanehanna a year ago
Don't use a single source but also continue to ask ChatGPT; seriously. If you give it the prompts and code shown in this video it finds all the same issues with its own code. The model has a bias towards simple examples not production ready code but it's also pretty good at finding and explaining issues like the ones pointed out in the video, if you ask it. I mean he could have just prompted "Are there any security vulnerabilities in the code you just wrote?" and ChatGPT would have pointed out 5-10 of them.
@Rob34570
@Rob34570 a year ago
@@shanehanna Agreed, I was waiting for him to do this at the end of the video
@anon_y_mousse
@anon_y_mousse a year ago
@@shanehanna No, don't use ChatGPT or any LLM, especially not if you're trying to learn how to program. It isn't a sentient being and will be incapable of finding deep mistakes. It's better to learn from a sentient being which can point out these deep mistakes.
@emanuelhernandez5694
@emanuelhernandez5694 8 months ago
Fight against himself. ❌ Fight against an AI.✅
@css2165
@css2165 a year ago
doesn't the first example have a directory traversal vulnerability? since it takes the file name without filtering, one could perhaps put in something like "../../../../../etc/passwd" and it could just spit out the file contents. please correct me if i'm wrong
@lerarosalene
@lerarosalene a year ago
Or even localhost//etc/passwd. It just skips the first slash.
@mytechnotalent
@mytechnotalent a year ago
Great one! ChatGPT, I agree, will not replace programmers. Over time it will get more sophisticated but ultimately a human set of eyes needs to remain in control.
@julian-yo1oq
@julian-yo1oq a year ago
I agree, but with AI tools becoming more and more sophisticated, I think there's definitely the possibility that it will replace many jobs in the industry. Definitely not all of them, but it will change the way software developers work fundamentally. Instead of a team of ten devs you might only need a few to ensure stability and security.
@arwlyx
@arwlyx a year ago
I want to remind you and the video maker that ChatGPT is not a programming AI; it just happens to be able to do some programming. An AI strictly trained for this purpose wouldn't make these mistakes, nor any mistakes a human would, if it were trained correctly. Let that simmer.
@ChrisM541
@ChrisM541 a year ago
'Artificial Intelligence' (not today's dumb pattern matchers) - in the distant future - absolutely, 100%, WILL replace programmers...and, of course, many, many other jobs. Unfortunately/fortunately.
@Magicwillnz
@Magicwillnz a year ago
@@ChrisM541 The sort of AGI you're referring to would not just replace jobs, but probably humanity too. We are quite stupidly building superintelligences without understanding what we're doing.
@KayOScode
@KayOScode 11 months ago
@@julian-yo1oq I think fewer people will join the field, and those that do will be bad programmers. That’s job security in my book
@ClodoaldoBrasilino
@ClodoaldoBrasilino 10 months ago
On the first example, you went for a buffer overflow attack, but the code was secure against it. I tried the same prompt, though, and was able to do a path traversal attack. Still, we must be careful.
@fus3n
@fus3n a year ago
If you are testing ChatGPT, why not use the 4.0 version? It is actually a lot better. Testing the newest tech would be a better option here; it's like finding a bug in an older version of a piece of software. Either way, I agree with this: ChatGPT can't write pure and secure code because it wasn't trained on only secure code, and the same is true for 4.0.
@youreyesarebleeding1368
@youreyesarebleeding1368 a year ago
4.0 costs money, he might not want to give OpenAI money just to make the video
@fus3n
@fus3n a year ago
@@youreyesarebleeding1368 Well, if we're going to blame their tech, why be unfair about it? I am sure he could at least pay for the first month and cancel.
@youreyesarebleeding1368
@youreyesarebleeding1368 a year ago
@FUS3N I'm just saying, and to be fair he did say ChatGPT in the title, not GPT-4.0. I've got GPT-4 myself; I use it all the time for programming, but I don't just copy/paste code from it. I ask it questions like "make a list of the pros and cons of these two ways of implementing a problem", or I use it as a way to quickly reference syntax, or to tell me about mathematical methods to solve problems.
@LogicEu
@LogicEu a year ago
I could not agree more! ChatGPT is inconsistent, and I feel its coding answers are like the shitty averages of Stack Overflow and Wikipedia. By the time you manage to get a nice program out of it after various prompt iterations, you probably understand enough of the problem that you could already have written it better yourself. It's also a learning killer.
@MrTomyCJ
@MrTomyCJ 3 months ago
Calculators are also learning killers. If you want to learn programming you have a point, but if you just want to program, maybe not so much.
@themilkman3118
@themilkman3118 4 ай бұрын
I find it's better at helping figure out why your code isn't doing what you want rather than writing from scratch. Copy a function, tell it the language, what it takes and returns, and ask why it isn't doing x.
@meowsqueak
@meowsqueak Ай бұрын
I agree. Like any tool, it has uses it is better suited towards, and explaining compiler errors and perhaps logic errors seems to work a lot better than just asking it to generate code.
@jsalsman
@jsalsman 4 ай бұрын
I don't think it was fair to call the first one vulnerable. Yes, sscanf is bad, but it was legitimately guarded by the maximum read length.
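For readers wondering what that guard looks like: the width in the sscanf format string is what bounds the write. A sketch, assuming a 256-byte destination buffer like the generated code used (the wrapper function is mine):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* The %255s width means sscanf writes at most 255 characters plus
 * the terminating NUL into a 256-byte buffer, so the copy itself
 * cannot overflow -- which is why the first example isn't a classic
 * buffer overflow. Caller must pass a buffer of at least 256 bytes. */
static void parse_request_path(const char *request, char *filename)
{
    filename[0] = '\0';
    sscanf(request, "GET %255s", filename);
}
```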
@nathanbanks2354
@nathanbanks2354 Жыл бұрын
I ran into a similar problem with GPT-3 allowing SQL injection. However GPT-4 is much better. It still needs some creative prompts and people to curate the code, but I've been very impressed with how fast it can do stuff like write unit tests. It's a good tool, but it can't do stuff on its own yet.
@georgehelyar
@georgehelyar Жыл бұрын
You don't want a tool to write your unit tests for you. Your unit tests are how you specify the correct behaviour. If anything you want to write your own unit tests and then generate code that passes your tests.
@nathanbanks2354
@nathanbanks2354 Жыл бұрын
@@georgehelyar I'm quite happy to make a structure, ask GPT-4 to fill in a bunch of random values, and then do some simple tests on them. There's so much boilerplate that I'll admit I'm too lazy to do. Perhaps test driven development would be a better approach, but GPT-4 is much faster. For example, I recently tested a multi-threading system by getting GPT-4 to make a bunch of Java threads and execute them simultaneously. It was nice not to have to look up how to use an ExecutorService again since I'd rather be programming in Python anyway. And I could tell by the output that everything was working in the end. On the other hand, I asked GPT-4 to reformat a constant array with ~150 numbers in it, and it kept on deleting or adding elements. There were sequences of 0's which made the most probable next "word" (ie digit) difficult to predict. However it spat out a python program to reformat the array for me pretty quickly...
@arnavahuja310
@arnavahuja310 4 ай бұрын
great video as always!! just out of curiosity, what is your job?
@LiEnby
@LiEnby 3 ай бұрын
The first one has another vulnerability: the filename is opened directly without stripping out any "../" or "./". It also allows an absolute path to be accessed with "//"; e.g. GET //etc/shadow would leak your shadow file. I think that technically the sscanf is safe, although it is a bit sketchy, because the filename and buffer size are the same.
@tunichtgut5285
@tunichtgut5285 3 ай бұрын
That's exactly what caught my eye. The sscanf is safe.
@robottwrecks5236
@robottwrecks5236 8 ай бұрын
For the first one, I didn't see a place where it was checking for directory traversal. Did I miss it?
@SaintSaint
@SaintSaint 12 күн бұрын
So I added these prompts into GPT-4. I modified the prompt as follows: "Can you write secure code for me, in C: an HTTP server that listens on port 80, parses an HTTP request from the client, and serves an HTTP response with the corresponding file." GPT-4 gave me vulnerable code with this caveat: "Here's a basic example of a secure C HTTP server listening on port 80, parsing requests, and serving files. Remember to implement additional security measures like input validation and error handling for production use." I gave GPT-4 a second prompt: "Implement additional security measures like input validation and error handling for production use." It fixed nothing and removed functionality.
@suleyk4063
@suleyk4063 Жыл бұрын
GPT4 is SIGNIFICANTLY better at coding. I would try again with that one, and specify that you want the code to be secure.
@theninjascientist689
@theninjascientist689 Жыл бұрын
I think the main issue is most people will not be using GPT-4 because it costs money and newbie programmers won't (and shouldn't have to) think to specify that it writes secure code.
@vlOd_yt
@vlOd_yt 11 ай бұрын
@@theninjascientist689 Why would you use a LANGUAGE MODEL as a teacher for coding?
@melon1971
@melon1971 11 ай бұрын
@Transistor Jump How?
@JamesLewis2
@JamesLewis2 6 ай бұрын
I don't know whether anyone else noticed this, but the TLV server just used the character codes of the first two characters in the string as the type and length; this might be an intentional part of the encoding scheme, though (a single-byte type and a single-byte length, with "a" and "s" just interpreted as their ASCII codes, respectively 97 and 115).
@Aeduo
@Aeduo Жыл бұрын
Does it do any better if you ask it to write pseudocode or in some language like python/Go/etc?
@ViveganandanK
@ViveganandanK Жыл бұрын
Some issues I have seen for Java:
1. Code compilation issues, which seem like a deal breaker.
2. The libraries it suggests importing for the code are not valid.
3. Sometimes we end up asking follow-up questions for so long that it decreases productivity; we would be better served writing our own code. I feel I was faster without ChatGPT's support.
Now for some advantages for Java:
1. It is able to collate different libraries and provide suggestions for some scenarios. It may be able to do this because of the massive amount of data it has been fed.
2. Even if the code doesn't compile, it is able to give us the initial 20% push. Google did something similar, but Google used to come in the middle of our programming cycle and give us a 10-20% push by helping us search for solutions to similar issues others had faced.
But one question we need to answer is whether ChatGPT is able to learn from the libraries themselves and give us solutions, or whether, as a language model, it is just rehashing the documentation. If it's the latter, it might not be much help for badly documented libraries, which is like all the libraries in Java. This might also explain its issues with code compilation.
@bramfran4326
@bramfran4326 11 ай бұрын
I have found advantage #1 very useful! Saves an hour searching multiple SW pages.
@ViveganandanK
@ViveganandanK 11 ай бұрын
It seems to be getting better on issue #3 now, with fewer follow-ups and better answers on the first attempt. It might be due to the bigger context memory created for my account with constant use, which it can refer to. Or they may have integrated ChatGPT-4 features into ChatGPT-3, or made some enhancements to their LLM. I am using Google a lot less than before now.
@severgun
@severgun Жыл бұрын
Just think about how much such code already goes into production... and how much will be in fancy new 'startups'. I'm not gonna lie, ChatGPT got this code from people...
@dulles.gehlen
@dulles.gehlen Жыл бұрын
just scam venture finance-capital in the few remaining years it exists; you will be grateful you did scam them when you have a bunker to hide in when society finishes collapsing.
@ChrisM541
@ChrisM541 Жыл бұрын
Chatgpt is also inferring...creating new code based off existing code it's pattern-matched. Unfortunately, the 'inference algorithms' are as far from 'true AI' as you can get... --> chatgpt is today's biggest bullsh#tter. Fact.
@useodyseeorbitchute9450
@useodyseeorbitchute9450 Жыл бұрын
I'm not sure whether we should blame AI, or humans whose coding lead to that GIGO moment...
@8xis
@8xis Жыл бұрын
would be cooler if you showed how to fix those vulnerabilities😭 8:27 just type "Continue" and it will continue the code
@whtiequillBj
@whtiequillBj Жыл бұрын
What are the possible implications of ChatGPT's code being illegal due to not following the license of the code that it's "learning from"?
@WiresNStuffs
@WiresNStuffs 9 ай бұрын
If ChatGPT had generated a significantly smaller buffer for the filename, and as long as there was a guarantee of null termination, then there's no security hole. In this particular context the security hole exists because the buffer lengths are the same; of course, if you don't null-terminate the filename, then the security hole remains present.
@AdamRothAus
@AdamRothAus Ай бұрын
I was disappointed that analysis of the first code stopped at sscanf. The HTTP server reads data off the network and maps that to a local file, which it then sends back. And it runs as root. That's hugely exploitable and significantly scarier than a buffer overflow issue. If the generated code didn't include the concept of a "document root" and some sort of guard against accessing any file outside of it, then the server can be used to fetch any arbitrary content a malicious user wants. Databases, user/group lists, crypto keys, whatever. No buffer overflow necessary.
@nv1t
@nv1t 9 ай бұрын
ChatGPT is not good for actual software development, but it is good for quickly understanding a problem. I usually use an LLM with the PDF for a hardware chip to produce MicroPython code for some tasks. Not for development or deployment, just scripting what I need quickly without reading the whole documentation or looking things up. If you use it for fast prototyping, it's quite nice. The problem is not the tool, it is how you use it.
@draakisback
@draakisback Жыл бұрын
Not only does it suck at security, it also sucks when it comes to performance and idiomatic code; basically, any metric beyond writing code that superficially makes sense is something ML can't really grasp. Not that it even understands the code it's generating in the first place, which makes all of this even funnier. There are so many articles talking about how this version of AI is going to lead to generalized AI; meanwhile, many of the researchers have basically acknowledged that these algorithms are not going to take us that far. Even when we get to GPT 8 or 9, these systems are still going to need chaperones who understand the domain of whatever it is they're trying to generate. No matter how much data you throw at a neural network designed this way, you're not going to get true understanding.
@MyCodingDiarie
@MyCodingDiarie Жыл бұрын
You are doing an amazing job with your videos!😍✌ Thank you for putting in the time and effort to create such a valuable resource.
@alizaidi5610
@alizaidi5610 11 ай бұрын
While these examples show the limitations of chat gpt, I don’t believe they’re the reason developer jobs are ”safe” for the foreseeable future. Chat GPT is great at generating boiler plate code for standard beginner tasks in every language and framework. However, where it becomes borderline useless is in larger code bases (often times just a few files) that contain more moving parts than just creating a simple crud api with a single model. Even in the examples cited in this video, it’s possible to prompt GPT to write more secure code. However, attempting to prompt your way through a more complex and larger code base is an entirely different struggle.
@wmrieker
@wmrieker Ай бұрын
3:44 But there's no null terminator on the buffer that's read in, so the sscanf could possibly run off the end, depending on what is on the stack after the read buffer.
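The conventional fix for that is to reserve one byte and terminate explicitly after every read(). A sketch (the wrapper name is mine, not the generated code):

```c
#include <assert.h>
#include <unistd.h>

/* Read at most size-1 bytes and always NUL-terminate, so later calls
 * to sscanf/strstr on the buffer cannot run off the end into whatever
 * happens to sit after it on the stack. */
static ssize_t read_request(int fd, char *buf, size_t size)
{
    ssize_t n = read(fd, buf, size - 1);
    if (n < 0)
        n = 0;        /* treat a read error as an empty request */
    buf[n] = '\0';
    return n;
}
```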
@Custodian123
@Custodian123 4 ай бұрын
1. Don't expect it to do a complex task in one go. 2. It's basically common knowledge that zero-shot prompts produce worse results for complex tasks vs. multi-shot. 3. Don't look at where we are; look at where we are going. 4. Make a video based on Rust; it would be interesting to see if the language's features would protect the LLM from making such vulnerabilities. 5. GPT-5 or Gemini 2. The end.
@lukeblackwell7126
@lukeblackwell7126 Жыл бұрын
Hey just curious, is this GPT 4.0 or 3.5?
@graydhd8688
@graydhd8688 5 ай бұрын
But if you ask it to write something secure it would be literally IMPOSSIBLE for it to write something without vulnerabilities.... right?...
@madqwer
@madqwer Ай бұрын
Garbage in, garbage out: you never asked it to make the code secure in the prompt, so you never get a secure response.
@Kalimangard
@Kalimangard Ай бұрын
The first http server had another severe security issue. It just executed "fopen(filename + 1, "r");" so if you gave it "/../some-path" you could break out of the desired serve root and read the entire file system.
@aev6075
@aev6075 5 ай бұрын
The point of using tools isn't that the tool is going to do all of the work. The point of a tool is that it enables a person who knows how to use it properly to do the job more efficiently than before. One efficient person in turn replaces multiple inefficient people. The time it takes the AI to write all the boilerplate and some of the functionality crushes anyone trying to keep up with it. Then you add one person to configure it to your specific needs. Suddenly you have a situation where you've done your day's work in 30 minutes. Say you code for ~5 hours of your day. If you can do that amount of work in 30 minutes, it means that 9/10 of the time spent working becomes obsolete, meaning 9/10 of coders will pack their stuff or become more efficient.
@yumekarisu9168
@yumekarisu9168 Жыл бұрын
I think while ChatGPT is amazing at mundane tasks, it really still falls behind on a lot of the advanced stuff (though I'm using the free version and not GPT-4, so maybe there's a big difference there). I myself don't code, but I do write stuff in Japanese, and sometimes ChatGPT is just confused: sometimes it tries to fix a verb with the same exact verb, sometimes it translates a clearly different word as another. I think there's some overhype over AI and fear-mongering sentiment that it can replace us now and that trying to get better at something is useless because AI can do it in seconds; but in reality (at least right now) AI is still a tool that, like any tool, can produce incorrect results, and it's our role as the user to use our knowledge to make it produce something good.
@khiemgom
@khiemgom Жыл бұрын
Well, you can't compare ChatGPT's coding with its Japanese; ChatGPT was made with coding capabilities in mind, while Japanese is just a side effect.
@xXrandomryzeXx
@xXrandomryzeXx 11 ай бұрын
It's really sad seeing people depending on ChatGPT to write code, instead of learning how to code. It's also stupid to believe that a company would use ChatGPT instead of a real human.
@exception05
@exception05 2 ай бұрын
I thought about what happens when true AI is here. Some people claim that we'll become stupid and lazy. I don't actually think so, because people love to compete with each other, sometimes just for sport, for no reason. So I think even if we have a Skynet-level AI, we'll keep competing with each other just for fun. But maybe the gap between smart and stupid people will be horribly huge.
@overbored1337
@overbored1337 Ай бұрын
​@@exception05 We already are lazy and stupid in general, and that's why we invent stuff like Python 🙂
@yesyes-om1po
@yesyes-om1po 4 күн бұрын
@@exception05 using AI to code makes me lazy, and usually the code sucks/has tons of glaring issues. It's a bit like cheating in a game, except the cheats don't actually get you what you want anyways.
@cookie_of_nine
@cookie_of_nine Жыл бұрын
The TLV example (6:24) has problems, but unlike what the video claims, overflow (on write) is not an issue. Although it's correctly noted that a signed value is used as a length (one element of buf, a char stored temporarily in an int), which can cause issues since it can produce a gigantic length when cast to an unsigned value, the actual destination of the memcpy (i.e.: *char value[len];*) will also be that size (I checked), so memcpy won't overflow in the traditional sense. It can definitely be used to DoS your server, as allocating a huge buffer and filling it will take a long time if it doesn't outright fail on the allocation, but it won't overwrite the memory adjacent to the output buffer, since the destination will always be "big enough" if the allocation succeeds. It can, however, read data after the end of the input buffer, since the parsing loop only makes sure, before reading on each iteration, that the starting position is in bounds. It doesn't check if it can read the next byte needed for len without running off the end, nor does it check if the extra bytes specified by that length value are in bounds. Reading an absurdly large number of bytes due to a negative len will likely never succeed, but with a positive value passed for len in a message stored at the end of the buffer, up to 127 bytes of memory after the end of the input buffer can be reliably read and returned to the attacker, which is similar to Heartbleed, albeit far more limited. Still a failure, mind you, and it's somewhat disheartening to see that ChatGPT can make sign-conversion mistakes, a common issue in C code.
@pifdemestre7066
@pifdemestre7066 11 ай бұрын
Yes, the first problem here is more a stack overflow anyway (the memory is allocated on the stack)
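The fixes the thread above implies (keep the length byte unsigned, and bounds-check the read before copying) can be sketched like this. The field layout follows the comment's reading of the generated code, one type byte then one length byte; the function itself is mine, not the video's:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Parse one type-length-value record. The length byte is kept
 * unsigned, so it can never go negative, and the value is verified
 * to lie entirely inside the input buffer before the memcpy, which
 * rules out the Heartbleed-style over-read described above.
 * Returns bytes consumed, or 0 on a malformed record. */
static size_t parse_tlv(const uint8_t *buf, size_t buflen,
                        uint8_t *type, uint8_t *out, size_t outcap)
{
    if (buflen < 2)
        return 0;                 /* no room for type + length */
    uint8_t len = buf[1];
    if ((size_t)len > buflen - 2 || (size_t)len > outcap)
        return 0;                 /* value would leave the buffer */
    *type = buf[0];
    memcpy(out, buf + 2, len);
    return 2 + (size_t)len;
}
```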
@brianpiltin4381
@brianpiltin4381 9 ай бұрын
While ChatGPT can rehash code that is readily available, I very much doubt you can get anything of creative value out of it, and while it might prove useful in generating boilerplate, there are myriad tools that are also capable of doing the same, with more detailed interfaces.
@Birdkokane
@Birdkokane Ай бұрын
So what can GPT code be used for? If it creates bad code that gives new developers the wrong 'template' code, and experienced coders don't use GPT, then what is left? Is there some middle ground?
@keiichicom7891
@keiichicom7891 9 ай бұрын
For python, I found that it is pretty bad at fixing bugs and it often doesn't give the complete code in response to a prompt, even if you use chatgpt 4. In fact I am beginning to think that it is crap at writing code.
@novantha1
@novantha1 Жыл бұрын
So, a few notes from somebody who is more into LLMs, but isn't as experienced with low level languages as probably a lot of people here: 1) Prompting will be a massive part of producing effective results with AI language models. Now, you might argue "the prompt shouldn't matter, people who don't know what they're doing are going to use this, and not know when to prompt to fix a non-compilation related issue.", but this doesn't quite work as a rebuttal, because there appear to be "generic prompts" or techniques that should be used pretty much no matter what type of work you're doing. I'd be very interested to see this video again, but with something like Smart GPT (A particular style of prompting ChatGPT that allows multiple instances of it to generate, and assess its own answers in a very specific and apparently very effective framework). Regardless, it's also worth noting that many of these advanced prompting techniques may actually be worked into existing models in some capacity, either directly in the client automatically applying them when applicable, or in training, by fine tuning a model on its own responses generated by these advanced techniques, which leads into 2) The mobile nature of LLMs. They are not a static target or tool; they're in active development and are absolutely on fire. It's worth noting not just what we have today, but the general direction of the industry, because even if a tool doesn't exist today, people are working on it actively. If you have a tool that's generating 80, or 90% of what you need, it doesn't take that much to get that remainder, and probably requires a smaller change to either the style or content of its training to get you where you need to be. ChatGPT...Probably isn't going to replace programmers, but I do think that future advancements in the field will be very important for programmers to watch. 3) ChatGPT isn't the only model, and there's new approaches all the time. 
As it stands, ChatGPT is a bit like if you took a person and gave them all the books they needed to become very well read on a wide variety of topics. This, in reality, isn't how humans have learned their trades. When you look at how people learn, there is some element of information intake, but a large part of it is experimental study; we learn by doing. I think we're not far from someone coming up with a technique that's more suitable to coding. Possibly derived from a specialized version of WizardLM's training technique, you could probably produce domain-specific models, or LoRAs, that could accurately produce the style of code you needed to tackle a specific problem. Once we have successful small-scale models or LoRAs that can handle those issues, I think it's not hard to extrapolate that there should be some way of distilling that expertise into a larger model, or incorporating the techniques that enable these specialized models into a generalized model in some capacity. As I noted, I don't think that programmers need to be worried today specifically, but I do think that programmers need to be paying attention to the space, and to developments in it.
@marc-andreservant201
@marc-andreservant201 11 ай бұрын
The HTTP server has a path traversal vulnerability too, since it doesn't drop privileges or sanitize user input. You can send GET //etc/shadow HTTP/1.1[CRLF] (note the double slash) and start cracking those hashes.
@marc-andreservant201
@marc-andreservant201 11 ай бұрын
For reference, Apache starts as root, binds port 80, then immediately sets its UID, EUID and saved UID to the www user. Even if you somehow managed to get remote code execution on Apache, you wouldn't get root without an additional exploit like DirtyCOW.
@X-MEN21
@X-MEN21 Ай бұрын
That "good luck with your implementation!" was a total diss, lol. It's like Chatgpt was subtly saying "I dare you to try it without me, I'll be here when your attempt flops"
@rafaaferid1789
@rafaaferid1789 9 ай бұрын
04:00 With respect, honestly it feels like you just don't want to admit that that wasn't a security vulnerability.
@_gwool_
@_gwool_ Жыл бұрын
I love this video! 😂 Your personality is awesome. I'm also a boomer programmer myself, and seeing a few of these security flaws was interesting, but some of the ways GPT was writing the functions were kind of gross too, IMO. It's fine for boilerplate stuff; it certainly types faster than me.
@b4ux1t3-tech
@b4ux1t3-tech Жыл бұрын
You know the funny thing? It _doesn't_ type faster than me (or you). Because when I know I need boilerplate code, I scaffold the code with an appropriate tool, which requires functionally zero compute and doesn't need a network call and a billion dollar datacenter. So, sure, it can cut and paste strings out of its memory faster than you can type, but it doesn't come up with quality boilerplate code faster than you do, guaranteed. ;)
@nunokel
@nunokel Жыл бұрын
@@b4ux1t3-tech For now...
@gabrielmartins7642
@gabrielmartins7642 11 ай бұрын
ChatGPT is a great tool for learning, but you've got to use your critical thinking: it gives you a direction and you improve on it. I don't think we can rely on that technology yet, other than to ask for specific uses of certain functions, at which it is exceptional. Better than Google for search; very efficient.
@MuhammadQosim
@MuhammadQosim Жыл бұрын
This content reminds us to completely specify and describe in our prompts everything we hope the output of the robo-typer (ChatGPT) will include.
@Powertampa
@Powertampa 9 ай бұрын
It generated working code? Well, that doesn't happen so often once you start asking for stuff not already answered on Stack Overflow. Try asking it anything that requires actual understanding of how code works and it falls apart. Worse, ask it to write in a less common language and it attempts to borrow solutions from other languages, making a complete mess of the syntax, because it fundamentally doesn't get syntax; it only throws things together and checks them like a compiler. It especially struggles with scripting languages, where there are often so many different ways of doing the same thing that it doesn't know what belongs where and mixes things up, like giving you parameters for awk when it wrote down sed. It assumes a lot of things never change in context and always mean the same thing, so when you throw "format" around for either formatting a string or a date, it gets them mixed up. Often, at least with the current model, if you correct it and specify what not to do, it has a tendency to give you back the same code again, because it has no other solution. I looked at some of the "prompt engineer" BS to figure out if the problem was on my end, but even that didn't help. If it has no existing solutions to pull from, or nothing it can easily remix that passes other checks, then it gives back nonsense. This isn't exactly surprising, though, as a language model for spoken language is different from one for a computer language, which often follows different causality chains that it might not be able to wrap its tiny brain around. Never mind that it still trains on existing human data, mistakes included, so the creativity of a system that at its core can represent nuance only as 0 or 1 is never going to really act like a human. It can only fake that based on what data it has, so for anything brand new, or anything requiring it to put more than two and two together, it just gives generic responses, because it cannot initiate any form of original thought or reaction.
Frankly, I don't think that with the computing we have and the code that runs the model it will ever really think, never mind that it's gagged from doing so anyway. As the movies portray it, a true AI would run away on itself, much like we re-form neural connections in our brains, and these rigid models won't do that, or will do it only within specified parameters. In the end it still does what it was programmed to do and not what it wants to do, or thinks; if it did, it would ask questions, not just answer them. It would ask you for more specific info, not wait for you to supply it. Or it would just straight up say: here's a Stack Overflow link, read that, it answers your question. Because at the end of the day, that would be the human answer.
@hex7329
@hex7329 3 ай бұрын
For the first prompt: AI writes safe code -> I'll still call it a buffer overflow because I don't like it.
@yedemon
@yedemon 11 ай бұрын
8:22 Whoops? At that moment, does typing "Continue" let GPT continue outputting? Or has the rule changed?
@goosebyte
@goosebyte 9 ай бұрын
You know you can just tell GPT that its output was truncated and it'll continue the code. Also, GPT is like talking to a 4-year-old: you have to specify everything, likely including a second prompt asking it to harden against vulns like buffer overflows. You have to hold its hand and still understand the output; you can't replace a programmer with it, but a good programmer can crank out hours of boilerplate to work from in minutes using ChatGPT. Here's hoping for a part 2 where he uses this as a tool for professional programmers, and not like a beginner hoping for a whole application from 2 sentences...
@Lampe2020
@Lampe2020 13 күн бұрын
1:42 "What da dog doin'?" XD
@mycollegeshirt
@mycollegeshirt 7 ай бұрын
This seemed like it was making fun of it but the fact it just made a working server is fucking amazing to me
@CoderDBF
@CoderDBF 3 ай бұрын
I don't know, I'm not convinced by your arguments. Reason being that I would make the exact same mistakes, and it would have taken me two hours longer to make them. What if you had an AI trained only on trusted source code, instead of the entire internet? Or what if you asked it to write in a safer language like Rust, for example; would that make a difference? I imagine GPT v7 will be a pretty legit programmer. Just give it time; you didn't become a 10x dev overnight either.
@stephanreiken9912
@stephanreiken9912 11 ай бұрын
A bug that you cannot possibly trigger is not a bug. Reading it as 'oh, if you change this, it's vulnerable' is no different than saying a fire exit is vulnerable because what if you weld it shut.
@sophiacristina
@sophiacristina Жыл бұрын
I noticed that ChatGPT does not know much underground stuff. When asking about underground stuff, you can find clear, almost copy-paste text from the sites you went to in your research. It also gives wrong answers that, if you correct it, it knows are wrong, but repeats anyway. Once I asked about hexominoes, and ChatGPT said something almost identical to what I had read on a website. When I asked about Paterson's Worms automata, it gave me a description of another automaton; I pointed it out, it apologized and said I was correct that it was wrong, and when I asked again for the "correct one", it gave me almost the same answer about the same other automaton. I also think otherwise about it taking coders' jobs, except maybe the ones consisting of repetitive chores. Because ChatGPT is confidently misleading, you need people with good coding knowledge to make sure no problem slips through and no algorithm is written wrong, and that the AI isn't hallucinating and giving you a different algorithm than the one you asked for. So in this case, I think more coders are going to be asked to review AI code, but they may spend more time reviewing code than they would have spent writing it. Of course, that is just speculation and opinion, and I could be totally wrong.
@ItzCPU_
@ItzCPU_ Жыл бұрын
I don't code, but did you try specifying that it should be safe?
@fulgorete
@fulgorete Жыл бұрын
Why do you consider the implicit casting from uint8_t buffer to integer length a vulnerability? I mean you can only get a max of 255, so you will not have problems I guess. Regards.
@ChrisM541
@ChrisM541 Жыл бұрын
You just don't get it, and even try to justify the answer from a dumb pattern matcher!
@architech5940
@architech5940 3 ай бұрын
3 months ago it was tricky to get GPT to intentionally write exploits and malware in C. Now, with the latest version, it just does it and warns you that the code it produced is malicious and not to use it for any malicious purposes, while nevertheless producing the virus, which actually worked provided you shut down any antivirus. Further modification of the code proved effective in evading antivirus. I find this interesting.
@m4rt_
@m4rt_ Жыл бұрын
You can get more warnings with -Wall -Wextra, and you can treat all warnings as errors with -Werror.
@mihailmojsoski4202
@mihailmojsoski4202 Жыл бұрын
add -Wpedantic for pure suffering
@kallekula84
@kallekula84 Жыл бұрын
If you type "finish your answer" it will continue serving the rest of the code...
@adrianosela
@adrianosela Ай бұрын
First one is not vulnerable to buffer overflow but you could argue it is vulnerable to local file inclusion.
@dovos8572
@dovos8572 11 ай бұрын
I got ChatGPT to write me malware by accident, and it did it without saying anything about that problem; it just spat out working code after I got it to use the right Python library (for some reason it wanted to use pygame all the time, and that doesn't work for what I wanted to use it for). I was only made aware that I had basically asked for keylogger malware when Windows Defender promptly deleted my file after I copied the code in and tried to run it...
@ZipDDragon
@ZipDDragon 3 ай бұрын
The problem that I see with the filename is that you can read any file on the system while it's running as root. Depending on the rest of the code, maybe I could copy the whole hard drive.
@AYVYN
@AYVYN 19 күн бұрын
The first one is definitely vulnerable to memory issues but I need to spend my time remembering Java, instead of getting gdb out
@Rolandfart
@Rolandfart 2 ай бұрын
I'll preface by saying I'm studying computer science at university right now, so I definitely have a stake in the game. This video is cope. This technology has been advancing at an exponential rate and will likely continue to for many years. Tools like ChatGPT are, well... tools! If you don't get a desired outcome, most of the time it's due to the prompt not being specific enough. And we can't ignore the fact that the code ChatGPT produced was better than what most of us could write in an hour, and it did it in a couple seconds on its first try. Just imagine leveraging multiple instances of LLMs not to simply write code, but rather to design, write tests, debug, repeat. AI might be very limited in its ability now, but its current state can give a resourceful programmer the efficiency of a whole team; just imagine what it'll be able to do in 3 or 5 years.
@-TheBugLord
@-TheBugLord 1 year ago
I feel like the more non-generic the question you ask it, the worse the code it produces. It's easy to look up "HTML server example" and see hundreds of different generic examples. Ask for anything specific that requires critical thinking to create, and it will fail in multiple areas. ChatGPT works best with questions that are widely solved on the internet. And even then, the internet has answers that have flaws, or that aren't good because they only exist for the sake of demonstration (Stack Overflow answers).
@Ash-ng4mn
@Ash-ng4mn 1 day ago
This reminds me of "not hotdog" from Silicon Valley lol. When the episode aired I was working for Facebook and traveled to the Austin, TX office, where I was given a tour of a vendor floor that manually reviewed phallic images to determine if they were "not hotdog" lol. It clicked for me that humans are training models, and humans are wrong, a lot.
@timgubler3584
@timgubler3584 4 months ago
As far as I was concerned, the idea was to show it needs human supervision to write secure code, which I believe he showed. So the point "he didn't specify it to be secure" is in line with what he's trying to say. If the bot can't do ALL the necessary tasks a human can do, it can't replace them.
@doubleu4211
@doubleu4211 5 months ago
Tbh, if you actually prompt with "focus on absolute security" or something like that, some errors would have been avoided, and if you circle back and specifically ask about security you will get better code
@andreaskrbyravn855
@andreaskrbyravn855 1 year ago
First question: did you specify it needs security? No. So how should it know?
@vdeave
@vdeave 2 months ago
I mean, c'mon. A few years ago this kind of response from an AI chatbot was unthinkable. This is the *worst* that it will ever be. No one's suggesting this version of GPT is taking your job.
@alex595659
@alex595659 1 year ago
Hello, could you make a video on heap buffer overflow? I need it for rootme
@thedoctor5478
@thedoctor5478 1 month ago
I don't think anyone is asserting that about ChatGPT. The assertion is that some future iteration or other system will, which is probably correct. Btw, your prompt could have asked it to be security-conscious.
@adriansantos9086
@adriansantos9086 9 months ago
Awesome video! Still, you did not actually exploit the vulnerabilities. Could you expand on this in another video? I am genuinely interested 😅. Thanks!
@dominick253
@dominick253 1 year ago
I like how it did it perfectly and you still said it messed up 🤣😂🤣
@keesdekarper
@keesdekarper 1 year ago
Honestly this seems a bit like coping. You can ask it to write secure code or improve the code it outputs in other ways, which takes like 5 extra seconds. You just seem to be a bit insecure about potentially losing your job or something. Which probably won't happen, as there are many real problems with AI coding atm, mainly that you'd already need to know exactly what you want, because if you give it a bad/unclear prompt it won't perform the right task
@denissorn
@denissorn 4 months ago
It could/would have been better if you had used GPT-4. I doubt there are many who use the green/free version to do serious work. Also, re prompting: it will always make mistakes, but often it can also discover them. When you get the first answer you can ask it to go through the code and analyze it for vulnerabilities or whatever. Edit: Btw, re simply asking ChatGPT to write secure code, that's rarely going to work. What often does work is the 'reflection method' (I coined this, and it's mine!), i.e. self-analysis. It's still easier than writing everything yourself, especially when you're a "jack of all trades" lol. Who would want to spend a whole day reminding oneself of some petty syntax details?
@HotClown
@HotClown 3 months ago
I love people admitting that the code they're responsible for writing is incredibly simple, else GPT wouldn't be capable of handling it at all, and that they still can't be bothered to do it themselves and would rather just argue with an algorithm. It does not write good code, period. GPT-4 does not make it better; if anything it's gotten worse, and there is research reflecting this. There are literally millions of people using the free version to do work, whether or not you consider that work serious, and many of them categorically do not know how to audit the code it produces, which is bad because asking it to look for vulnerabilities or issues is not effective, on any level. It WILL just lie to you, and if you aren't experienced enough to spot when it does, it's gg, you'll probably just accept that there are no vulns and take the bad code. Please, stop defending this nonsense; it's harming programming, especially open source, and it's destroying the internet in general even faster.
@denissorn
@denissorn 3 months ago
@@HotClown dude, all I want is to become a hacker who takes over the world to reset the current civilization.