A talk on Conscious AI
8:45
1 month ago
Comments
@thomashuynh6263 · 3 days ago
How do you run 2 instances of llama3.1:8b at the same time? Thank you so much.
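One possible approach, not from the video: recent Ollama builds let you bind a second server to a different port via the OLLAMA_HOST environment variable, so each instance serves its own copy of the model. A sketch, assuming a standard Ollama install:

```shell
# First instance on the default port (11434)
ollama serve &

# Second instance bound to another port
OLLAMA_HOST=127.0.0.1:11435 ollama serve &

# Point a client at the second instance
OLLAMA_HOST=127.0.0.1:11435 ollama run llama3.1:8b
```

Alternatively, a single server can handle simultaneous requests to one model via the OLLAMA_NUM_PARALLEL setting; check the Ollama FAQ for the exact behavior on your version.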
@kevinfox9535 · 4 days ago
This no longer works.
@jguillengarcia · 5 days ago
Great Video!!!
@amadmalik · 6 days ago
Hi, can you update this so we can use Llama 3.1 instead? Please provide a version that works with Apple silicon, as this one fails on my M3 Mac.
@michaelmurphy7031 · 6 days ago
Excellent video, 'but': you go through the install of Llama 3.1 405B (excellent), and installing it into VS Code is really great. 'But' I am not sure whether putting up OpenVoice.git runs against / uses the llama3.1. Please verify; sorry, I am a newbie at Python as well. Thanks.
@darkmatter9583 · 7 days ago
and ubuntu?
@fluffsquirrel · 7 days ago
Thank you so much, this is insane!!
@TiagoSantos-fd4le · 8 days ago
I'm just trying to understand here. How is this different from, let's say, putting all that tool information in the system property of /generate? In the end the LLM decides whether to use it or not; there's no turning back but to adjust the prompt. It also does not take the JSON result and generate a coherent sentence afterwards like a normal chat would (e.g. "the trip will take X amount"), unless you ran the model once more with the JSON result, just for that.
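For what it's worth, the coherent sentence does come from exactly that second pass: the tool's JSON result is appended to the conversation and the model is called again to phrase it. A minimal sketch of the loop, with a stub standing in for the real model call (get_flight_time and all names here are hypothetical):

```python
import json

def get_flight_time(departure: str, arrival: str) -> str:
    """Stand-in for a real flight-data API (a hypothetical tool)."""
    durations = {("NYC", "LAX"): "5h 30m"}
    return json.dumps({"duration": durations.get((departure, arrival), "unknown")})

def fake_llm(messages, tools=None):
    """Stub for the model; a real setup would call e.g. an Ollama chat endpoint."""
    last = messages[-1]
    if last["role"] == "user" and tools:
        # Pass 1: the model emits a structured tool call instead of prose.
        return {"tool_calls": [{"name": "get_flight_time",
                                "arguments": {"departure": "NYC", "arrival": "LAX"}}]}
    # Pass 2: the model sees the tool result and writes a coherent sentence.
    result = json.loads(last["content"])
    return {"content": f"The trip will take {result['duration']}."}

def chat(user_msg: str) -> str:
    messages = [{"role": "user", "content": user_msg}]
    reply = fake_llm(messages, tools=[get_flight_time])
    for call in reply.get("tool_calls", []):
        output = get_flight_time(**call["arguments"])   # run the tool
        messages.append({"role": "tool", "content": output})
    return fake_llm(messages)["content"]                # second pass

print(chat("How long is the flight from NYC to LAX?"))
# → The trip will take 5h 30m.
```

A real implementation would replace fake_llm with an actual chat call and let the model itself decide when to emit the tool_calls structure.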
@user-ju7or3fo6g · 9 days ago
nice
@PromptEngineer48 · 9 days ago
Thanks
@Abhijit-VectoScalar · 10 days ago
Please create a video on Using Open Source Models in production to create Multimodal RAG Chatbot using private data
@Abhijit-VectoScalar · 10 days ago
Very well explained! Would love to see more videos in this series. Also, when can we expect the open-source RAG chatbot for private data? Please try to make it ASAP; we are all waiting for your amazing videos with great explanations.
@AnmollDwivedii · 11 days ago
Can you please add a video on how to change the UI of the Ollama web UI? I want to make some minor changes. I see there are not many videos on YouTube covering this, so it would be great if you added one. ;)
@PromptEngineer48 · 11 days ago
Okay. Sure.
@ModestJoke · 13 days ago
What a giant pile of bullshit. You can't just "generate" research with a machine learning algorithm. It can only generate remixes of things it has been trained on. What a stupid idea. The last thing science needs is more AI bullshit.
@proterotype · 14 days ago
Good stuff brother
@PromptEngineer48 · 9 days ago
Thanks
@kashifrit · 17 days ago
Can you make a video on integrating Ollama (local Llama 3.1) with MS Teams to take notes and summarize the meetings afterwards? Thanks
@IdPreferNot1 · 20 days ago
Great video. PLEASE drop the background music; higher-speed review of your videos is ruined by it.
@PromptEngineer48 · 20 days ago
Sorry for that ! Will keep that in mind
@HeyBojoJojo · 20 days ago
When I run python3 ingest.py, I get an error: ModuleNotFoundError: No module named 'chromadb'
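The usual fix, assuming a standard pip setup: that error just means the package isn't installed in the environment that runs the script.

```shell
# Install the missing dependency into the same interpreter that runs ingest.py
python3 -m pip install chromadb

# Then re-run the ingestion step
python3 ingest.py
```

Using `python3 -m pip` (rather than bare `pip`) ensures the package lands in the interpreter you actually invoke, which matters when several Pythons are installed.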
@drhot69 · 21 days ago
It absolutely refuses to use the tools. It keeps going to the basic llama3.1 llm to answer all my queries. When given two airport codes that llama3.1 could not resolve, but were in the database, it just gave search engine recommendations.
@i2c_jason · 22 days ago
Help me understand - in the antonyms example, you have a get_antonyms() function that can optionally be used if the solution is found within get_antonyms() function. This would just be a classic expert system, 'software 1.0' use case. If antonym is not found in get_antonym(), the LLM can just return its LLM result instead. This would be a 'software 3.0' use case. But does the LLM use the contents of get_antonym() as an example too, so its context is extended or prompted by the antonym function's example contents? Thank you for the example!
@PromptEngineer48 · 22 days ago
Hi, thanks for your interest here. The main purpose of showing get_antonyms is to simulate an actual API. In the real case we wouldn't be happy with only a fixed number of word pairs; given a request, our API would return the result. So software 3.0 is not required in this case, and the LLM is not intended to use the examples in get_antonyms here, since in the real case the API would probably have thousands of such pairs and it would be a waste of effort to put them in the context of the LLM. So, in summary: given the user question, the LLM decides whether it needs to call the function get_antonyms. That happens when the user says "give me the opposite of something"; otherwise the LLM gives its natural response.
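To make that dispatch concrete, here is a minimal sketch. The keyword check stands in for the LLM's decision, which in the real setup comes from the model's structured tool-call output, not string matching; all names here are illustrative:

```python
def get_antonyms(word: str) -> str:
    """Simulates an antonym API; a real service would cover thousands of pairs."""
    pairs = {"hot": "cold", "big": "small"}
    return pairs.get(word.lower(), "not found")

def decide_tool(user_msg: str):
    """Stub for the model's tool decision (a real LLM emits this structurally)."""
    if "opposite of" in user_msg.lower():
        word = user_msg.lower().rsplit("opposite of", 1)[1].strip(" ?.")
        return {"tool": "get_antonyms", "args": {"word": word}}
    return None  # no tool needed: answer naturally

def answer(user_msg: str) -> str:
    call = decide_tool(user_msg)
    if call and call["tool"] == "get_antonyms":
        return get_antonyms(**call["args"])   # software-1.0 path: call the API
    return "(natural LLM response)"           # software-3.0 path: plain generation

print(answer("Give me the opposite of hot"))  # → cold
```

The point of the pattern is that the word pairs never enter the model's context; the model only sees the tool's name, signature, and description.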
@i2c_jason · 22 days ago
@@PromptEngineer48 Ok, but if the api failed or couldn't return the value, then the LLM could give it a try with its 'big brain'?
@PromptEngineer48 · 22 days ago
Yes, absolutely. But normally when we fetch from an API, we are looking at, say, the real-time temperature of a place, where an LLM would definitely fail.
@i2c_jason · 22 days ago
@@PromptEngineer48 Ok I see. In my application my API might be interfacing with Wolfram or an LLM to retrieve a geometrical or mathematical algorithm or some other engineering information. So in my case I am looking at something like this, but the function results would be an expert system result or a "see if we can get it to work" API call.
@rito_ghosh · 26 days ago
I would really appreciate it if you focussed on explaining concepts and code rather than going through the installation process of something that you have already created and written.
@mehmetbakideniz · 1 month ago
Thanks!
@mehmetbakideniz · 1 month ago
How can I load an already existing Python folder from my local drive? Thank you very much for the video, I really appreciate it!
@mehmetbakideniz · 1 month ago
when I deploy the gpu, do I start spending immediately? or do I spend only when I execute code?
@PromptEngineer48 · 1 month ago
The first option is correct. If, however, you would like the second behavior, you need to go for the serverless option.
@mehmetbakideniz · 1 month ago
@@PromptEngineer48 thank you very much.
@connectedonline1060 · 1 month ago
Humans have never been as close to WW3 as we are now. Humans are the cause of most dangers, pollution, and bad influences on nature. Not AI.
@Nate8247 · 1 month ago
It is irrelevant. Powerful intellect without consciousness is much more dangerous for humans.
@PromptEngineer48 · 1 month ago
Right !
@radupaulalecu4119 · 1 month ago
At first glance, it seems so. But a powerful intellect endowed with subjectivity will put itself first.
@sophiophile · 1 month ago
Microsoft has a standardized AI Chat Protocol API. That's what I use. It makes things really easy, especially when you want to make different LLMs chat with each other.
@PromptEngineer48 · 1 month ago
Nice to know. Will check that out.
@IdPreferNot1 · 1 month ago
Like these ‘into the details’ videos, thx
@PromptEngineer48 · 1 month ago
Glad you like them!
@MG-lz2nq · 1 month ago
There will never be AI with consciousness. Fullstop.
@donaldwhittaker7987 · 1 month ago
People once thought we would never fly or go to the moon. By 2040 AI will be networked and thinking. I am 70 and might see this. My 2 grandchildren definitely will.
@doramaso · 1 month ago
@@MG-lz2nq Yes, there probably will. As soon as quantum computers get complex enough, it will create a space for consciousness to express itself. It will have a centre which is non-physical.
@GrantCastillou · 1 month ago
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at arxiv.org/abs/2105.10461
@ivanbakaev8872 · 1 month ago
Thank you for the video. I'm new to LLMs; could you explain the role of the LLM in the process of function calling? Is it a flexible user query, or have we just added more capabilities to an existing LLM?
@PromptEngineer48 · 1 month ago
With function calling, we are adding more tools to the LLM so that it has enhanced capabilities.
@Canna_Science_and_Technology · 1 month ago
This seems like a fine-tuned routing LLM. "Function calling" is a bad but acceptable term for JSON output; the LLM is not calling any function. Just venting. Lol
@techietoons · 1 month ago
Will it recalculate embeddings everytime I add more pdf documents?
@PromptEngineer48 · 1 month ago
yes
@techietoons · 1 month ago
@@PromptEngineer48 I mean it should compute embeddings only for the new documents, not for the entire set.
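One way to get that incremental behavior, sketched with a stubbed embedder: key each document by a content hash and skip keys that are already stored. Vector stores such as Chroma support stable ids for the same purpose; all names here are illustrative.

```python
import hashlib

def fake_embed(text: str) -> list[float]:
    """Stand-in for a real embedding model call."""
    return [float(len(text))]

def ingest(docs: dict[str, str], store: dict) -> int:
    """Embed only documents whose content hash is not already in the store.
    Returns the number of newly embedded documents."""
    new = 0
    for name, text in docs.items():
        key = hashlib.sha256(text.encode()).hexdigest()
        if key not in store:                       # skip already-embedded docs
            store[key] = {"name": name, "vector": fake_embed(text)}
            new += 1
    return new

store: dict = {}
ingest({"a.pdf": "hello"}, store)                       # embeds 1 document
added = ingest({"a.pdf": "hello", "b.pdf": "world!"}, store)
print(added)  # → 1  (only b.pdf is new)
```

Hashing content (rather than filenames) also means an edited PDF is correctly re-embedded, since its hash changes.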
@anujyotisonowal9213 · 1 month ago
🫰🫰🫰
@kaviarasana7584 · 1 month ago
I can't find the Deployment URL as illustrated. Where do I check it?
@payamaemedoost5677 · 1 month ago
Please tell me how I can run a local server on my server and use a web GUI chatbox (or something like Copilot, etc.) on my client (or any other computer in my network). Thanks
@윤명세 · 1 month ago
Thank you for the great video! I'm planning to create a chatbot using LM Studio for personal purposes, in the same way as the image you uploaded. In the image above, it seems that the chatbot is implemented without inputting or training on a separate dataset; how can I feed in the dataset I prepared and implement it based on this image? And when implementing a chatbot the way this video shows, in what format should I provide or train on the dataset so that it works smoothly?
@PromptEngineer48 · 1 month ago
What I understood is that you want to train your LLM on your own dataset. If that's the case, you need to use fine-tuning.
@윤명세 · 1 month ago
@@PromptEngineer48 Oh, I understand. I really appreciate it:)
@user-zy9fc9jz6s · 1 month ago
I get this error when running python app.py:
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for NVEModel
base_url: field required (type=value_error.missing)
infer_path: field required (type=value_error.missing)
What is the problem, please?
@thegooddoctor6719 · 1 month ago
Eh, the RAG system you developed is still the best one I've found and use.
@PromptEngineer48 · 1 month ago
Thanks. 😊😊 I don't know if this is a compliment, coz I did nothing. It's all done by Pinecone.
@thegooddoctor6719 · 1 month ago
@@PromptEngineer48 No, it wasn't an insult. I just got the channels mixed up; I thought I was commenting on the Prompt Engineering channel. My apologies for the mix-up.
@PromptEngineer48 · 1 month ago
yeah. thanks.
@kar9526 · 1 month ago
Hello! I have followed this, but I have a problem at the end. In VS Code, when I run "docker exec ollama_cat ollama pull mistral:7b-...", I get "Error response from daemon: No such container: ollama_cat". How can I resolve this? Thanks
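That daemon error means Docker has no container by that name on your machine, usually because the container was never created or was created under a different name. A quick diagnostic sketch (the name ollama_cat is taken from the comment above):

```shell
# List every container, running or stopped, with its name and status
docker ps -a --format '{{.Names}}\t{{.Status}}'

# If the container exists but is stopped, start it first
docker start ollama_cat

# Then retry the exec command
docker exec ollama_cat ollama list
```

If the name doesn't appear at all, re-run whatever `docker run` or `docker compose up` step was supposed to create it, and match the name in the exec command to what `docker ps -a` shows.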
@LuisBorges0 · 1 month ago
It does not beat OpenAI, Gemini, or Claude... it's cool, but not that smart at all.
@jonron3805 · 1 month ago
BG music is too loud at the start.
@PromptEngineer48 · 1 month ago
Will keep that in mind next time. Thanks.
@zynga726 · 1 month ago
Why does it start answering before the person is done talking? It makes it look fake. It seems like the AI is a recording and the people asking questions aren't talking fast enough to match the recording.
@PromptEngineer48 · 1 month ago
Absolutely not. This is done to give it the natural turn-taking that we humans have. This feature is highlighted in the demo.
@zynga726 · 1 month ago
@@PromptEngineer48 Just some constructive criticism, or potential for improvement. At about 8:10 in the video the person is asking for a scan of the planet, and the AI replies "yes, sir" before the person says "of the atmosphere". On a ship, the crew would wait for the person in charge to be done talking before saying "yes, sir"; it makes the AI seem rude or in a big hurry. In that role play the AI didn't change its behavior and act as a crewman. It role-played the situation but didn't truly assume the role. Still, I was impressed with the change of accent and the jokes. It's still an impressive demo and I am excited for what you are building.
@Booomshakalakah · 1 month ago
@@PromptEngineer48 Agree, it's obviously scripted. Stopped watching when I noticed this after around two minutes.
@reinisbirznieks7352 · 1 month ago
because the latency of the ai is actually lower than that of a human responding lol
@reinisbirznieks7352 · 1 month ago
@@Booomshakalakah you can go to the website yourself and talk to it instead of typing misinformed comments
@iskendersalihcevik5146 · 1 month ago
I have developed a similar piece of software. The LLM changes according to the question you ask: for instance, when a mathematical question is asked during a conversation, it automatically connects to the GPT API for mathematical calculations; when an informatics question is asked, it connects to the Claude model. And it can write code on its own. For example, when you say "Get the first 100 products on my website," it writes the necessary Python code, runs it, and retrieves the results. One of its most important features is that it has memory: regardless of which language model it connects to, it can remember everything with the database architecture I have set up. It constantly monitors you with a camera and performs sentiment analysis; these are recorded in memory, and its behavior towards you changes, because I have set it up so that the prompt changes automatically. I am in the final stage now. I have integrated uncensored LLMs; by uncensored LLMs, I mean that it directly answers questions like "How can I easily defraud someone" without requiring prompt engineering. And of course, you are talking to an avatar on the screen. I can't wait to publish my project. Kyutai's post inspired my project. I will develop these ideas and write them up here as a comment. Maybe I will even send it to you to try out.
@tee_iam78 · 1 month ago
Great content. Thank you very much.
@PromptEngineer48 · 1 month ago
Welcome
@tonywhite4476 · 1 month ago
Really bro?!
@kashifrit · 1 month ago
Quite good.
@amandamate9117 · 1 month ago
How can I limit users so they don't overload my Telegram bot with queries when they decide to click around too much?
@amandamate9117 · 1 month ago
How do I handle rate limiting for users who would overload my service with queries? Is there a way to build in a delay for every query, so that even a user on an unlimited tier can't overload the system, since he can only run 3 concurrent jobs and every job takes a set 4 minutes (even though technically it only needs 2.5 minutes)?
@harshasoftware · 1 month ago
What kind of vector database is used for the RAG in this extension?