NVIDIA'S NEW OFFLINE GPT! Chat with RTX | Crash Course Guide

83,274 views

TroubleChute

1 day ago

Nvidia has released their new private GPT chatbot called Chat with RTX. This quick video shows you how to download, install and use it. It's very simple and super powerful. You can ask the AI about documents, folders, PDFs, Docs, videos and more! By the end you should know how it works and how to use it.
Download Chat with RTX: www.nvidia.com/en-us/ai-on-rt...
======== Related AI videos ========
Chat with RTX isn't the only powerful, offline, free software (Some don't even need GPUs!) See these videos for more info:
[CHAT] Oobabooga Desktop: • NEW POWERFUL Local Cha...
[IMAGE] Stable Diffusion: • AUTOMATIC1111 SDUI One...
[VOICE] Applio: • BEST FREE TTS AI Voice...
Timestamps:
0:00 - Intro/Explanation
0:40 - Requirements to use Chat with RTX
1:30 - Downloading Chat with RTX
1:40 - Installing Chat with RTX
2:50 - Opening Chat with RTX
3:12 - AI with Documents, PDFs and MORE!
4:14 - AI with YouTube videos
5:27 - AI Model Default
5:37 - Is Nvidia Chat with RTX worth downloading?
#Nvidia #RTX #AI
-----------------------------
💸 Found this useful? Help me make more! Support me by becoming a member: / @troublechute
-----------------------------
💸 Support me on Patreon: / troublechute
💸 Direct donations via Ko-Fi: ko-fi.com/TCNOco
💬 Discuss the video & Suggest (Discord): s.tcno.co/Discord
👉 Game guides & Simple tips: / troublechutebasics
🌐 Website: tcno.co
📧 Need voiceovers done? Business query? Contact my business email: TroubleChute (at) tcno.co
-----------------------------
🎨 My Themes & Windows Skins: hub.tcno.co/faq/my-windows/
👨💻 Software I use: hub.tcno.co/faq/my-software/
➡️ My Setup: hub.tcno.co/faq/my-hardware/
🖥️ My Current Hardware (Links here are affiliate links. If you click one, I'll receive a small commission at no extra cost to you):
Intel i9-13900k - amzn.to/42xQuI1
GIGABYTE Z790 AORUS Master - amzn.to/3nHuBHx
G.Skill RipJaws 2x(2x32G) [128GB] - amzn.to/42cilxN
Corsair H150i 360mm AIO - amzn.to/42cznvP
MSI 3080Ti Gaming X Trio - amzn.to/3pdnLdb
Corsair 1000W RM1000i - amzn.to/42gOTGY
Corsair MP600 PRO XT 2TB - amzn.to/3NSvwzx
🎙️ My Current Mic/Recording Gear:
Shure SM7B - amzn.to/3nDGYo1
Audient iD14 - amzn.to/3pgf2XK
dbx 286s - amzn.to/3VNaq7O
Triton Audio FetHead - amzn.to/3pdjIgZ
Everything in this video is my personal opinion and experience and should not be considered professional advice. Always do your own research and ensure what you're doing is safe.

Comments: 274
@Tarangot 5 months ago
Just used Chat with RTX to summarize your video in about a minute worth of reading. What a crazy time to be alive. I'll leave your video running in a tab so you're credited for the view and watch time.
@BabySisZ_VR 5 months ago
lol
@GumboRyan 5 months ago
Efficient AND considerate.
@looseman 5 months ago
It is reading from the subtitles, not from the video.
@KIaKlaa 5 months ago
just used chat with rtx to create a thingmabob to make yo wife bald and yo dog fat, watch out m blud
@ekot0419 4 months ago
I have been doing that using Chatgpt for a long time already.
@MrErick1160 5 months ago
Wow this is AMAZING. A non-cloud chat that we can use with our local documents!!! Freaking cool and very useful product, NVIDIA def knows what people need
@DrakeStardragon 5 months ago
Uhh, they are not the first, but ok.
@merlinwarage 5 months ago
LM Studio has been out for almost 8 months; it does the same and 10x more.
@KillFrenzy96 5 months ago
Well we already have many solutions for this. It's running Mistral 7B which has been available for many months now. It's nowhere near ChatGPT quality though. However if you have a 24GB GPU, I would suggest running the more powerful Mixtral 8x7B model using EXL2 3.5 bpw quantization. I use the oobabooga WebUI for this. It's about as powerful as ChatGPT free, but is much less restrictive.
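A rough back-of-the-envelope sketch (not from the video) of why a ~24GB card is the usual suggestion for Mixtral 8x7B at 3.5 bits per weight; the ~46.7B total parameter count and the ~2GB runtime/KV-cache overhead are assumptions:

# Approximate VRAM needed to hold quantized weights plus a fixed overhead.
# Assumptions: Mixtral 8x7B has ~46.7B total params; ~2 GB overhead for cache/runtime.
def estimate_vram_gb(params_billion: float, bits_per_weight: float, overhead_gb: float = 2.0) -> float:
    weight_gb = params_billion * bits_per_weight / 8  # billions of params * bits per weight -> GB
    return weight_gb + overhead_gb

if __name__ == "__main__":
    print(f"Mixtral 8x7B @ 3.5 bpw: ~{estimate_vram_gb(46.7, 3.5):.1f} GB")  # ~22 GB, fits a 24 GB card
    print(f"Mistral 7B   @ 5.0 bpw: ~{estimate_vram_gb(7.3, 5.0):.1f} GB")   # ~6.6 GB, fits an 8 GB card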
@adrianzockt5347 5 months ago
GPT4ALL also exists and supports multiple chats, like chatgpt does. However it crashes when reading large documents and doesn't have the youtube feature.
@chromefuture5561 5 months ago
And it finally adds another real reason for the 40-series RTX cards.
@tbarczyk1 3 months ago
Awesome tutorial! This is the first one of yours that I've watched, but between this one and a few others I've looked at since, your tutorials are the best I've seen anywhere. Thanks for getting into all the interesting details and not dumbing it down like your viewers are idiots.
@ashw1nsharma 5 months ago
Thanks for this new discovery! Hope you're having a nice day! 🌻
@SB-KNIGHT 5 months ago
This is really cool and one of the biggest missing pieces in the whole equation. Being able to run these models locally and be able to highly curate your own will be very valuable. GPT4All is really neat, does a decent job with this as well, so I am glad to see something similar from Nvidia who makes the GPUs. Crazy times!
@no_the_other_ariksquad 5 months ago
It's really useful when you have a folder full of documentation for different APIs and the like; very good for that.
@19mitch54 5 months ago
After exhausting the free trials of DALL-E and Midjourney, I bought my new computer with the RTX3070 to run Stable Diffusion. I love this AI stuff. Chat with RTX was a LONG download and it downloaded more dependencies during install but was worth it. I didn’t bother exploring the included dataset and started with my own documents. This works great! I want to build a big library of references and put this thing to work.
@jimmydesouza4375 5 months ago
How good is it for automatically generating things? For example if you stick a bunch of PDFs for a roleplaying game ruleset and setting and then ask it to generate DM prompts from that, can it do it?
@19mitch54 5 months ago
I don’t know much about role playing games. The program is good at answering questions. I pointed it to some manuals including my car’s owners’ manual and it was able to answer technical questions like “how do I reset the service interval?” I want to test it with some microcontroller programming manuals next.
@Vysair 5 months ago
@@19mitch54 This is wicked. Your usage is hella perfect for programmers and the like.
@AvtarSingh1122 4 months ago
Nice👌🏻
@amumuisalivedatcom8567 2 months ago
@@jimmydesouza4375 i'm late but yup, consider using RAG (Retrieval Augmented Generation) to pass docs to the LLM.
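For anyone wondering what "passing docs to the LLM" with RAG actually means, here is a minimal, dependency-free Python sketch of the retrieve-then-prompt idea. It uses toy bag-of-words scoring purely as an illustration; real tools (including Chat with RTX) use embedding models and a vector index, so treat every name here as hypothetical:

import math
from collections import Counter

def bow(text: str) -> Counter:
    # Toy bag-of-words vector: lowercase word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    # Return the k chunks most similar to the question.
    q = bow(question)
    return sorted(chunks, key=lambda c: cosine(q, bow(c)), reverse=True)[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    # Stuff the retrieved chunks into the prompt; the LLM then answers from them.
    context = "\n---\n".join(retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    docs = [
        "The service interval is reset by holding the trip button for 10 seconds.",
        "Tyre pressure should be 2.3 bar front and 2.1 bar rear.",
    ]
    print(build_prompt("How do I reset the service interval?", docs))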
@RedVRCC 2 months ago
Thanks! I just downloaded and installed it but I'm not too sure how to get it running. Working with these complex LLMs is still new to me but I really want my own AI so your video really helps. I hope this runs well enough on my entry level af 3060. This seems simple enough. Will it at least remember everything it learned so I can keep training it more and more?
@IIHydraII 4 months ago
Can you make a video about different presentation modes and how to set them? I’m trying to get my games to run in Hardware Composed: Independent flip, but I’ve only been successful when running games in non native resolutions and also forcing windows to use that resolution. If I try to run native, I end up with Hardware: Independent Flip. I’m aware the only difference between HWCF and HWI is that the former uses DirectFlip optimisations, but I can’t figure out why they’re not working at native resolution. Kinda stumped here. 😅
@elpideus 5 months ago
Definitely much easier to set up compared to your average text-generation-webui; however, it still has a long way to go when it comes to features and control.
@minty87 4 months ago
Would love to see a photo generator in it; I'd definitely get on it in that case. Nice video.
@handsonlabssoftwareacademy594 A month ago
Man, I really like your analysis, great work. So can ChatRTX be used with any CPU and graphics card, including Intel HD Graphics, as long as there's sufficient RAM, like 16GB?
@christerjohanzzon 28 days ago
No, you need an RTX card from at least the 3000 series. It's the tensor cores that are important. Luckily these cards aren't expensive.
@_B.C_ 5 months ago
Will it do this for yt videos in another language?
@EuropaeusOrigo 4 months ago
Very cool!
@LaminarRainbow 5 months ago
Thank you!!
@johncollins9263 A month ago
I am having an issue installing this: it comes up with "Chat with RTX failed to install". Hardware is not an issue, as everything I have is new, but it still decides not to work?
@user-uw9ir7fl8l 2 months ago
Yup, a solid demo for an intro, with your PC and an AI model that's local.
@girinathprthi 5 months ago
Interesting; started downloading this app.
@Jascensionvoid 5 months ago
I keep getting this error when trying to upload some PDFs into my dataset: [02/23/2024-19:42:28] could not convert string to float: '98.-85' : Float Object (b'98.-85') invalid; use 0.0 instead
@MTX1699 4 months ago
So, is there a solution to this?
@hairy7653 4 months ago
The YouTube option isn't showing up on my Chat with RTX.
@leeishere7448 5 months ago
How can I get the Llama 13B model? I don't have it.
@invisisolation 5 months ago
I’m curious… If you’re comparing between models with the same amount of VRAM (e.g. 3050, 3060 8GB, 4060) will the quality of the outputs improve if the card is better or will it only just have a faster/slower response time?
@ahmetemin08 5 months ago
no, only the interference speed will differ.
@Embassy_of_Jupiter 5 months ago
if it's the same model, not running with lower precision, it shouldn't make a difference in quality.
@Unknown-xm8ll 5 months ago
See, the weights in a neural network are preset by Nvidia, so there's no change in responses; the model is shipped with the most optimal neural weights, which determine the accuracy and precision of the model. A better, faster GPU like a 4070, 4080 or 4090 can improve the speed of the results, but the jump up to the 4080 is not significant. Only the 4090 performs noticeably faster compared to other GPUs. And fun fact: you can run Chat with RTX on an AMD GPU 😂 with slight tweaks, or just copy the model data and paste it into the llama interface.
@PrintScreen. 5 months ago
@@ahmetemin08 isn't it "inference" ?
@ahmetemin08 5 months ago
@@PrintScreen. you are correct
@IzanamiNoMikotoo 5 months ago
The reason Llama 2 doesn't show is that it "requires" 16GB of VRAM. It will only let you install it if your card has at least 16GB... Unless you change the setting in the llama13b.nvi file. If you set the value to, say, 10GB then you can run it on a 3080 10GB. Idk if it will work perfectly but you can try.
@codeblue6925 5 months ago
where is that file located?
@codeblue6925 5 months ago
nvm i found it
@crobinso2010 4 months ago
@@codeblue6925 Did it work? I have a 12GB 3060
@rockcrystal3277 4 months ago
How do you change the setting in the llama13b.nvi file to 10GB for it to work?
@IzanamiNoMikotoo 4 months ago
@@rockcrystal3277 Go to the file llama13b.nvi located in the installation directory “\NVIDIA_ChatWithRTX_Demo\ChatWithRTX_Offline_2_11_mistral_Llama\RAG”. Then change the "MinSupportedVRAMSize" value to however many GB of VRAM your card has.
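If you'd rather not hand-edit it, a small script can make the same change. This is a sketch under the assumption (taken from the comment above, not verified here) that llama13b.nvi is a plain-text file containing a MinSupportedVRAMSize value; it keeps a backup, and there is no guarantee the model runs well below the official VRAM requirement:

import re
from pathlib import Path

# Assumed location, taken from the comment above; adjust to your extracted installer folder.
NVI_PATH = Path(r"NVIDIA_ChatWithRTX_Demo\ChatWithRTX_Offline_2_11_mistral_Llama\RAG\llama13b.nvi")
NEW_MIN_GB = "10"  # set to however many GB of VRAM your card has

text = NVI_PATH.read_text(encoding="utf-8")
NVI_PATH.with_name(NVI_PATH.name + ".bak").write_text(text, encoding="utf-8")  # keep a backup

# Replace the first number that follows "MinSupportedVRAMSize", whatever the surrounding markup is.
patched, n = re.subn(r"(MinSupportedVRAMSize\D*)(\d+)", r"\g<1>" + NEW_MIN_GB, text, count=1)
if n:
    NVI_PATH.write_text(patched, encoding="utf-8")
    print("Patched MinSupportedVRAMSize to", NEW_MIN_GB)
else:
    print("MinSupportedVRAMSize not found; the file format may differ.")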
@ubaidfayaz1989 2 months ago
Sir how can we bypass the nvidia check that occurs prior to installation?
@KenZync. 4 months ago
I just downloaded this and it can't be run. Can you try removing and redownloading it? I think Nvidia messed something up.
@yuro1337 5 months ago
it looks like Whisper AI with chat and some additional models
@arsalanganjeh198 5 months ago
Nice
@abdiel_hd 3 months ago
Mine didn't come with YouTube as a dataset/source... can someone help me? I have a laptop with a 3070.
@TheMangese 3 months ago
I'm interested in having an interactive AI chatbot in my chat channel on Twitch. Can this do that?
@shadowcaster111 5 months ago
Is the non-C-drive install fixed yet? I tried it on my P drive and it failed to install.
@Green_Toast 4 months ago
No, sadly not; they talked about it on the Nvidia forum.
@jackflash6377 4 months ago
I just installed it to my F: drive under a folder named RTXChat and it's working as normal.
@monkshee 5 months ago
Hey man, I don't see the Llama option when installing. I already have an install; how would I add it to the list of models?
@haseef 5 months ago
same issue here even though I ticked clean install
@N1h1L3 5 months ago
Win 10?
@zslayerlpsfmandminecraftan367 4 months ago
Llama 2 needs 16GB of VRAM when not quantized, so if you have 8GB it doesn't install it.
@TonTheCreator 3 months ago
I installed and used it, but after I closed it I can't use/open it again. I mean, I don't know how to.
@faa- 5 months ago
this is so cool
@moonduckmaximus6404 4 months ago
THE YOUTUBE OPTION DOES NOT EXIST IN THE DROP-DOWN MENU
@Jcorella 5 months ago
6:57 What was that model? Couldn't understand you
@zslayerlpsfmandminecraftan367 4 months ago
Oobabooga desktop, which in itself is a GUI similar to this, but it lets you use custom models. It's more complicated to set up, though, with Python 3.10.9.
@elgodric 5 months ago
How many pages of the document can Mistral 7B handle?
@user-ky1jp7ev8b 4 months ago
Until you run out of RAM and VRAM.
@Tore_Lund 5 months ago
System requirements are minimum requirements? Is Win11 needed or does Win10 work?
@Vysair 5 months ago
Isn't Win11 just Win10 under the hood? Why wouldn't it work?
@lolxgaming7993 A month ago
I tried downloading it, but the download is really slow. Is this normal?
@Vimal_S_Thomas 3 months ago
Will it work on my laptop with an RTX 2050?
@mayorc 5 months ago
Does it support custom models like using OpenAI api endpoint local servers?
@JA_BRE 5 months ago
It's only a demo; no way it supports that yet...
@JoyKazuhira 5 months ago
Wow, maybe in the future this will be added to a game. Would definitely use it instead of turning on ray tracing.
@thanksfernuthin 5 months ago
You finally made another video I'm interested in! 😃I was just on the verge of letting you go. My main interest is the AI stuff.
@TroubleChute 5 months ago
Always happy to cover new stuff when I hear about it ~ A friend let me know of this. I also saw the new OpenAI video stuff... but nobody has access to that yet...
@RentaEric 5 months ago
You do know subscribing is free. If you leave, 10 others will replace you 😅
@thanksfernuthin 5 months ago
@@RentaEric Unless he doesn't create content they want. You understand how consensual interactions work, right? Or do you have ten thousand subscriptions and you can't pick out what you want to see from all the crap?
@RentaEric 5 months ago
@@thanksfernuthin You act like you support him financially, or even through liking every video and commenting. Do you? If not, your opinion is irrelevant, because you're talking about leaving if he doesn't give you what you want, but have you given him anything besides taking his free content?
@thanksfernuthin 5 months ago
@@RentaEric So it's a bad thing to give feedback in your mind? You think he doesn't want to know when people like what he does or doesn't like what he does? Have you ever produced something of value for another human being in your life?
@user-sl9op3gy5e 3 months ago
I don't have the YouTube URL option.
@erkinox1391 5 months ago
I really don't get it; I have all of the requirements (VRAM, RAM, OS, latest driver, plenty of storage), but whenever I launch the installation, it stops and says "Chat with RTX Failed" and "Mistral Not Installed".
@jaderey467 5 months ago
Are you on Windows 11? It doesn't work on 10.
@ben9262 5 months ago
I'm getting the same thing
@AlecksSubtil 4 months ago
Completely disable your antivirus, and also check the tray icon to disable it from there. Avast, for example, has to be disabled from the tray icon; disabling it only in the GUI is not enough. Also install it to the default folder. It may be necessary to run it with admin privileges. It is safe to install, btw.
@siddharthmishra8283 4 months ago
Waiting for your 12gb SUPIR version installation guide for A1111 Sdxl 😊
@arooman3194 5 months ago
At 6:56 I cannot understand the tool you suggest; would you mind posting a link to it?
@carlossalgado9075 5 months ago
Same issue.
@sky37blue 3 months ago
It is in the video description [CHAT] Oobabooga Desktop: • NEW POWERFUL Local ChatGPT 🤯 Mindblow...
@blitzguitar 5 months ago
Can I use it to overclock my 3070?
@IndieAuthorX 5 months ago
I was excited to use this, but I got it up and running and things did not work so well. I realized that it technically wasn't made to run on Windows 10, according to the requirements page, and I think that might be why. I think this kind of thing has potential, but I want a chatbot that is fully released for commercial use before getting too comfy with it.
@acllhes 5 months ago
Windows 11 is one of the requirements listed
@IndieAuthorX 5 months ago
@@acllhes Yeah, I saw that after. I could have sworn I'd seen both systems. I might have read a non-Nvidia page first and then just installed.
@fontende 5 months ago
I'm not sure what you mean by "commercial"; none of this is allowed by the license. It's only allowed for research use by the original Llama license (except if it's based on Llama 2, where some use is allowed but limited by installations). If you just want a chatbot right away, the easiest way is Llamafile by Mozilla: just click and it works. Their small model container is about 1.5 GB but can analyse images.
@glucapav 4 months ago
It is saying I don't have 8 GB of GPU memory. Is it checking my integrated GPU instead of my Nvidia one? How do I fix this? I'm using an Asus Pro Duo, so the BIOS isn't letting me change it.
@queless 4 months ago
What card do you have?
@dioghane231 2 months ago
I have an RTX 3050 and it won't let me install it? Why?
@banabana4691 5 months ago
I think it makes Nvidia graphics cards more valuable.
@LaminarRainbow 5 months ago
Originally I thought it didn't work, but turns out I just have to wait.. :P
@SpudHead42 5 months ago
Does it support other models, like Mixtral?
@zslayerlpsfmandminecraftan367 4 months ago
At the current time, no... for that you need a GUI like oobabooga or KoboldCPP, which support custom models.
@MiNombreEsEscanor 5 months ago
I downloaded this and it works pretty well locally, but I want to create a web application and use this chatbot in my application. Currently, Chat with RTX doesn't offer an API to send questions and retrieve answers. Is there any way to achieve this? Or maybe they will add an API feature in the future? What do you guys think?
@Hypersniper05 5 months ago
Text generation webui
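To illustrate what the reply above is pointing at: text-generation-webui (and several other local LLM servers) can expose an OpenAI-compatible HTTP endpoint that a web app can call instead of Chat with RTX. A minimal sketch, assuming such a server is already running locally with its API enabled; the URL, port and model name below are assumptions, not anything shown in the video:

import json
import urllib.request

# Assumed local OpenAI-compatible endpoint (e.g. a local server started with its API enabled).
API_URL = "http://127.0.0.1:5000/v1/chat/completions"

def ask(question: str) -> str:
    payload = {
        "model": "local-model",  # placeholder; many local servers ignore this field
        "messages": [{"role": "user", "content": question}],
        "max_tokens": 200,
    }
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarise the uploaded manual in two sentences."))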
@voidsh4man 5 months ago
At scale it would cost you more to run an AI chatbot on your own hardware than using OpenAI's API.
@anispinner 5 months ago
Considering it runs a local node, I suppose one of the folders should contain plain .js files; otherwise it might be packed as an Electron app, which you can unpack and inject your API into.
@fontende 5 months ago
Nvidia never made any great software; they're only hardware. Don't count on that. Why do you think we use Afterburner made by MSI (why Nvidia can't make such a tool is a puzzle)? Even this they could have made a year ago by hiring any student from an AI faculty.
@anispinner 5 months ago
Puzzle? Why would you make overclocking software that goes against your business model? Your goal (as a business) should be to sell the product, not to extend its lifespan.
@jonmichaelgalindo 5 months ago
Thanks for the video. Very informative. GPT4All and LMStudio are probably easier for most users though, and they support more models, more OSs, and more features. I wonder what NVidia thought was so special about this...
@NippieMan 5 months ago
Offline AIs can be useful since companies such as OpenAI put very restrictive rules in place. While there are already programs that can do what NVIDIA is offering, most consumers are too stupid to set them up themselves.
@AntonChekhoff 5 months ago
Which GPU-accelerated model would you recommend? For translation for instance?
@bigglyguy8429 5 months ago
Well, I love Faraday and LM Studio, but getting them to understand my own docs is hard.
@jonmichaelgalindo 5 months ago
@@AntonChekhoff I haven't done any translation. I use Mistral raw for my D&D solver system, and for creative writing (mostly for generating large lists, like a thesaurus but for abstract topics).
@crobinso2010 4 months ago
I'm hoping for that too -- a comparison between LM Studio and Chat with RTX, which do the same things.
@kathiravan_vj 5 months ago
Does the RTX 2060 Super support this with 16GB RAM?
@xXXEnderCraftXXx 5 months ago
Well, no. At least not without some bypass programs.
@TazzSmk 5 months ago
is RTX A4000 supported? should be Ampere generation card I believe
@skym1nt 4 months ago
yes, it can.
@KrishnVallabhDas 4 months ago
I am getting this error: ModuleNotFoundError: No module named 'torch'. How do I fix this??
@CindyHuskyGirl 4 months ago
pip install torch (put this into your terminal)
@OpenAITutor 4 months ago
You should go through the installer. It has all the stuff built in. It also creates its own virtual Python environment in a folder called env_vnd_rag.
@ahmetrefikeryilmaz4432 5 months ago
One question: is that HHKB I have been hearing?
@andru2260 5 months ago
wdym HHBK?
@jomymatthews 5 months ago
What is Ub boo boogie desktop ?
@rickybobbyracing9106 5 months ago
Wondering that same thing myself
@Subarashi77 3 months ago
They removed the YouTube URL option.
@violentvincentplus 5 months ago
35GB goes crazy
@Flashback_Jack 5 months ago
About the same size as a triple A game.
@pedro.alcatra 5 months ago
Exactly. The size is absolutely fine. The problem is having to download it thru the browser instead of a download manager
@arsalanganjeh198 5 months ago
Lighter than cities skylines 2😂
@gamingballsgaming 5 months ago
@pedro.alcatra I'm fine with that for archival purposes. If I want to install it in the future, I can as long as I have the exe, even if the Nvidia servers shut down.
@Javier64691 5 months ago
@@Flashback_Jack An old triple-A; most nowadays are 60GB plus.
@Lp-ze1tg 5 months ago
How slow will it be if I run it with 4GB or even 2GB of VRAM? Will it even run with less than 8GB of VRAM?
@Baconator119 5 months ago
It requires a 30 or 40 Series GPU, the weakest of which iirc is a 3050 with 6GB of VRAM. So, will it run with less than 8? Yeah. It might be slow, though.
@MARProduction24434 4 months ago
Tried it. The installer just blocks it if the requirement is not met ;(
@rockcrystal3277 5 months ago
I noticed Llama didn't install for you either; have you found any way to install it?
@queless 4 months ago
It requires an RTX card with 16gb vram or more
@rockcrystal3277 4 months ago
@@queless how do you change the setting in the llama13b.nvi file to 10gb for it to work?
@queless 4 months ago
Don't know; I have a 4070 Ti Super OC 16GB and it worked for me without anything extra. Uninstalled it an hour later because the AI is super basic, like ChatGPT 1 but dumber.
@juanb0609 4 months ago
I don't have the option for YouTube videos.
@hairy7653 4 months ago
same here
@buttpub 5 months ago
So why on earth would anyone choose this over, for example, Ollama through WSL on Windows, or the even easier GPT4All? With this you only get one model, Mistral, which is a good model, but at 35 GB of download how could that possibly be the model file, considering the minimum requirement is 8GB of RAM? So what other bloatware is there? The Mistral model is only 7.4GB through any of the freeware model query tools mentioned above, or by just downloading the model and weights yourself. Nvidia is once again late to the party, and they forgot the drinks.
@anispinner 5 months ago
Most of those you mentioned use the CPU for that easier setup, especially GPT4All. As for the size, I guess it's the dependencies, and the convenience that you can uninstall everything with one click since most of it should be within one folder. Otherwise the user has to deal with pythons, condas and other reptiles. Hmm, maybe it also contains a portable CUDA? I'd have to give it a closer look as well.
@buttpub 5 months ago
Most of what I mentioned? GPT4All AND Ollama BOTH have the option to use CPU or GPU depending on your setup. If you have gotten to the point of trying to mess with LLMs on your local PC, then you know how to open a terminal window.
@anispinner 5 months ago
There is quite a difference between opening a console and clicking an install button.
@buttpub 5 months ago
@@anispinner Indeed, without context there is, but with context, and the fact that these are LLMs, you need some basic understanding before you even embark on this. And people without any are rarely at this point yet, and if they are, they can learn.
@cmdr.o7 5 months ago
I hope this software doesn't just snoop around your file system and documents, scraping it all back to Nvidia with telemetry. Wouldn't be surprised at all if it did; people have little respect left for privacy. If it turns out it does, well, I just hope the video author has done his research and isn't just blindly enabling Nvidia. That said, we are each responsible for our own security and for fighting back against invasive big tech, malware, rootkits etc.
@Jet_Set_Go 5 months ago
They have Nvidia Experience for that already
@jordanturner7821 5 months ago
They already do that with telemetry data. He absolutely does know what he is talking about. @@jeffmccloud905
@cmdr.o7 5 months ago
@@jeffmccloud905 That's right, that is the troubling part. Clearly you don't know either, or you would have enlightened us, but you are a man of few words. Scraping user data is not a big mystery; it happens everywhere. I think most people have a pretty good idea about that, and I do actually know quite a lot about AI systems, and Nvidia xD
@AndrewTSq 5 months ago
I think Microsoft's AI already does that in Win11.
@goldmund22 3 months ago
I'm glad I finally found someone commenting on the privacy aspect of this. Since you mentioned you are experienced with AI and Nvidia, do you think there is a good chance this is happening, even though it is "local"? I am considering using it for analyzing specific folders and PDFs related to my work. I guess the only way to be sure it doesn't also have access to everything else is to literally use this on a different PC and on a different network. I don't know. Then I think about Microsoft OneDrive, and well it already is connected most of everything we have on our PCs by default. Just insane.
@GKGames2018 4 months ago
Mine does not have the YouTube option.
@rionix88 5 months ago
Gemini will use this technology; you can chat with a 1-hour video.
@muruganmurugan507 5 months ago
It's cool. Does it support a single 2GB PDF with 4000 pages? 😂
@bensoos 5 months ago
Now real intelligent bots in games.
@arsalanganjeh198 5 months ago
Is there any chance of using this with a 4GB graphics card?
@VGHOST008 5 months ago
You can install oobabooga locally and use a relatively small model like Tiny-Llama 1B or some other 3B~ model. NVidia uses a 7B model (requires exactly 8Gb of VRAM at medium~ accuracy settings) as a low end solution so there is no way you'd be able to run it with decent performance on 4Gb of VRAM.
@galaxymariosuper 5 months ago
A much better option is LM Studio. There you can offload layers from the NN to the GPU as you wish, and the installation and usage are even easier than this RTX stuff.
@VGHOST008 5 months ago
@@galaxymariosuper Yeah, stability is also an issue with LM Studio. It often crashes and the results it produces are very shallow. Same with GPT4All and any other relatively small client (Kobold UI would be the only exception; it just crashes often).
@fontende 5 months ago
An even easier solution is the Llamafile container by Mozilla; it runs on Win 8 on very old hardware. I personally use oobabooga, but it's annoying how every new update breaks some previous function and it isn't fixed for months; always back these up before updates.
@mayday2011 5 months ago
I have 6gb vram 3060
@spicymaggi1853 5 months ago
I only have 4GB of VRAM (dedicated); is there any workaround for this?
@mascot4950 5 months ago
If you are not aware of LM Studio, then you might want to check that out as it doesn't require a GPU (but it does support using them, and you can partially offload however many layers the GPU has vram to hold). Assuming sufficient ram+vram, you can download and use the same model. But, there's no ability for ingesting local files as far as I am aware.
@vulcan4d 5 months ago
This is a demo which clearly means Nvidia wants to see how many people will use it so they can release a subscription based service later for your AI offline needs.
@ozz3549 5 months ago
That's only a UI for the Llama 2 model; you can find another UI and it will work the same.
@gavinderulo12 5 months ago
@@ozz3549 It's also something you can build in a week.
@XiangWeiHuang 5 months ago
Can we make an erotic roleplay chatbot with this? I use the OpenAI API solely for those.
@OpenSourceGuyYT 5 months ago
Yea. With Ollama, you don't need to have an RTX GPU. And it's offline too.
@mr.bekfast9744 5 months ago
Am I the only one downloading this where Setup.exe is not in the zip file?
@victornpb 5 months ago
same problem, zip seems corrupted
@pillowism 5 months ago
Same issue here
@0AThijs 5 months ago
For many 😔
@mr.bekfast9744 5 months ago
@@victornpb Okay, good to know that I'm not the only one. Is there any way for us to report it, or to get an older version where the zip isn't messed up?
@boro057 5 months ago
Pretty cool that the setup is so simple. I wonder if there’s any telemetry going on in the background. GeForce experience has loads which is why I avoid it.
@MaiderGoku 5 months ago
Answer this properly: what's the download size, and how much space does it take on your hard drive?
@IMABADKITTY 5 months ago
35gb download size
@MaiderGoku 5 months ago
@@IMABADKITTY how much for rtx remix?
@083-cse-sameerkhan3 5 months ago
Will it work on a GTX 1650?
@zslayerlpsfmandminecraftan367 4 months ago
Nope, 30/40 series only.
@carsfan9648 5 months ago
zip corrupted?
@0AThijs 5 months ago
It seems... 😢 35GB!
@aalejanddro2328 5 months ago
Is there a fix?
@carsfan9648 5 months ago
Is it because I have windows 10?
@0AThijs 5 months ago
@@carsfan9648 No, it should be fixed now; I haven't tried it. Redownload 🥲
@Xandercorp 5 months ago
So how private is it?
@notram249 5 months ago
Very. Since it runs on your PC.
@flurit 5 months ago
Nvidia's really making me regret getting an AMD card.
@Waldherz 5 months ago
Downloading dependencies for hours and hours and hours. Zero network activity. Anti virus checked, admin mode checked, network checked. No user error.
@CrudelyMade 5 months ago
6:57 using WHAT kind of desktop? lol
@punkrocklover334-fl1qq 5 months ago
the ogyboogy desktop 🤣🤣🤣
@sriaakashsrikanth8622 4 months ago
Can an Nvidia GeForce GTX 1650 be used?
@heyguyslolGAMING 5 months ago
What is the fastest animal on the planet?
@DeepThinker193 5 months ago
The slug.
@Spectrulight 5 months ago
Idk probably a falcon
@N1h1L3 5 months ago
@@Spectrulight The peregrine falcon is the fastest bird, and the fastest member of the animal kingdom, with a diving speed of over 300 km/h (190 mph).
@TenOfClub 5 months ago
airborne Microbes👌👌
@bgill7475 5 months ago
Me when I need to pee
@Spengas 5 months ago
That sucks that it is windows 11 only... never upgrading from 10
@nosinfantasia 4 months ago
Anyone else getting "installer failed" with no reason given...?
@OpenAITutor 4 months ago
This only works for RTX 4000 series min with 8GB of VRAM.
@_vr 5 months ago
Llama is Facebook's chat model
@blueyf22 3 months ago
my teachers will never know what hit em
@im_Dafox 5 months ago
everything was fine until "windows 11" 😄 Shame, looks really cool and useful
@MousePotato 2 months ago
AI voice. Us Brits never say anyway with a plural.
@mhvdm 5 months ago
Very buggy, tested it myself and I must say I'm impressed, but darn they need to fix bugs. It was very bad at responding to stuff in general.
@andyone7616 4 months ago
Can you make a video on how to uninstall chat with rtx?
@NarbsWorldTV 5 months ago
It didn't chat.
@paulocoelho558 4 months ago
File Size 35 GB? Why? 💀💀
@OpenAITutor 4 months ago
The two LLMs are 14 GB and 8 GB. Then NVIDIA installs Miniconda and all the Python libraries in a separate environment called env_vnd_rag (16 GB), plus TensorRT-LLM for creating the engines to work with your GPU.
@itxaddict7503 5 months ago
C'mon Skynet. You need us to hand you the world on a silver platter?
@Ortagonation 5 months ago
It has dedicated tensor cores for AI, but uses RTX cores instead. Kinda funny.
@Vvilvid 4 months ago
I have 4 PCs and none of them can run it 😭😭
Custom PC 1: AMD
Custom PC 2: AMD
Laptop 1: RTX 3050 Ti (4GB)
Laptop 2: AMD