Comments
@vimalaug15 8 hours ago
Need to know more about table structure recognition.
@denisgorbunov_yt a day ago
Hello! Does anyone know how to fix these errors? I am trying to deploy the project in Google Colab using a T4 GPU:
downloading nougat checkpoint version 0.1.0-small to path /root/.cache/torch/hub/nougat-0.1.0-small
config.json: 100% 557/557 [00:00<00:00, 2.16Mb/s]
pytorch_model.bin: 100% 956M/956M [00:02<00:00, 380Mb/s]
special_tokens_map.json: 100% 96.0/96.0 [00:00<00:00, 518kb/s]
tokenizer.json: 100% 2.04M/2.04M [00:00<00:00, 173Mb/s]
tokenizer_config.json: 100% 106/106 [00:00<00:00, 587kb/s]
INFO:root:Output directory does not exist. Creating output directory.
/content/myenv/lib/python3.10/site-packages/torch/functional.py:512: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3587.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
0% 0/2 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "/content/myenv/bin/nougat", line 8, in <module>
    sys.exit(main())
  File "/content/myenv/lib/python3.10/site-packages/predict.py", line 167, in main
    model_output = model.inference(
  File "/content/myenv/lib/python3.10/site-packages/nougat/model.py", line 592, in inference
    decoder_output = self.decoder.model.generate(
  File "/content/myenv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/myenv/lib/python3.10/site-packages/transformers/generation/utils.py", line 1914, in generate
    result = self._sample(
  File "/content/myenv/lib/python3.10/site-packages/transformers/generation/utils.py", line 2648, in _sample
    model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
TypeError: BARTDecoder.prepare_inputs_for_inference() got an unexpected keyword argument 'cache_position'
-> Cannot close object, library is destroyed. This may cause a memory leak!
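A likely cause (not confirmed in the thread) is a version mismatch: newer transformers releases forward a cache_position keyword that nougat 0.1.0's decoder does not accept. A minimal check you could run in the same Colab environment, assuming the workaround is to pin transformers to an older release before rerunning the nougat CLI; the version boundary below is an assumption, not an official requirement:

```python
# Sketch of a version check; the 4.38 boundary is an assumption based on when
# generate() started forwarding `cache_position`, not an official nougat requirement.
import transformers

print("transformers version:", transformers.__version__)
# If this prints 4.38 or newer, try downgrading, e.g. pip install "transformers<4.38",
# restart the runtime, and rerun the same nougat command.
```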
@shubhamghait6644 2 days ago
I want to extract the important information from medical record PDFs, along with an overview summary of the documents, in JSON format. Could you please guide me on which approach and model to use for this? If possible, share the resources or a GitHub link.
@RitheshSreenivasan a day ago
Book a Topmate meeting. The link is in the channel description.
@techthunder4832 5 days ago
Hi sir, can I do the same in Amazon SageMaker or Amazon Bedrock?
@RitheshSreenivasan 5 days ago
You should be able to do it.
@manalkim200 5 days ago
Hi sir, what about model interpretability? Have you ever used a model interpretability algorithm for image regression?
@RahatRezaSulemani 5 days ago
Do we always need to crop? If I don't want to crop, does that mean we will have to fine-tune?
@rajarams3722 8 days ago
Thanks for sharing. Is there any online RAG interest group or community?
@rajarams3722 8 days ago
So basically, according to that article, there is no case where RAG can stand alone? Fine-tuning is certainly a pain because of the resources needed, the hyperparameter tuning, and particularly the dataset preparation.
@javiergimenezmoya86 8 days ago
Why is HumanEval not shown for Llama 3 when it is well known? And why do the other benchmarks in that study differ from previous benchmark results?
@RitheshSreenivasan 8 days ago
You need to ask the authors.
@ankitbansal4288 13 days ago
Amazing video! Can you tell me how to get access to the dataset?
@RitheshSreenivasan 13 days ago
You need to clear a test and also have an academic reference. Look at the video description for the links.
@ojaskulkarni8138 23 days ago
Hi, I was trying to use this code to run inference with my own model "ojas-kool/llama-3-8b-Instruct-bnb-4bit-ojas-kool", but somehow I was getting this error: "Runtime Error loading model: Failed to create LLM 'llama' from '/root/.cache/huggingface/hub/models--ojas-kool--llama-3-8b-Instruct-bnb-4bit-ojas-kool/blobs/0e9bd2e56f419c58c8555d2375d6e3090a4b0368468219182414d5dc4df788d2'." I would really appreciate some help.
@ojaskulkarni8138 23 days ago
Or please make a video on how we can save models as GGUF and then run inference with them.
@drmetroyt 25 days ago
Is this using a GPU?
@RitheshSreenivasan 24 days ago
Don't remember, as this video was made a long time ago.
@drmetroyt 25 days ago
My request: please do a video on Marker, which is similar to Nougat. I'd also request using a large PDF file with multiple pages, around 50-100 pages.
@RitheshSreenivasan 24 days ago
Noted
@sujithanagasuri6151 26 days ago
Sir, can you make a video with some other dataset?
@arushini6379 27 days ago
After running the code, the chatbot is created but no responses are generated. What might be the issue? I am getting an authentication error.
@RitheshSreenivasan 26 days ago
No idea, as I did not have any issues. Check HF_TOKEN.
@AruuData 26 days ago
Is it because I don't have a paid L4 instance?
@AruuData 26 days ago
The HF token was okay.
@RitheshSreenivasan 24 days ago
@AruuData No
@AruuData 24 days ago
Changing the runtime to a T4 GPU worked. What can we do if we need to deploy a website permanently using this model? Can we do it using PythonAnywhere, or any other services you know of?
@hk-vv9qm 29 days ago
Hi. Thanks for the video. I need the code for this paper and project. Could you please tell me where I can get it? Thanks a bunch.
@RitheshSreenivasan 28 days ago
Please check the video description for the paper link.
@hk-vv9qm 28 days ago
@RitheshSreenivasan Thanks for responding. There is no link to access the code of the project. I've downloaded the paper, but I need the code and implementation.
@RitheshSreenivasan 26 days ago
I am not the author of the paper. Contact the authors.
@yanayana-cm5qg 29 days ago
What is the reason for the following error? Exception: data did not match any variant of untagged enum PostProcessorWrapper at line 2395 column 3
@RitheshSreenivasan 29 days ago
No idea
@jeremyclimbs 29 days ago
Getting an error on 1.1 Load text encoder:
ImportError                               Traceback (most recent call last)
<ipython-input-24-186cdabda356> in <cell line: 3>()
      1 from transformers import T5EncoderModel
      2
----> 3 text_encoder = T5EncoderModel.from_pretrained(
      4     "DeepFloyd/IF-I-XL-v1.0",
      5     subfolder="text_encoder",

/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)
   3120             )
   3121         elif not is_accelerate_available():
-> 3122             raise ImportError(
   3123                 "Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install accelerate`"
   3124             )

ImportError: Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install accelerate`
What do I do?
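The traceback itself names the fix: the accelerate package is missing. A minimal sketch of the same load it shows, assuming a Colab runtime; the model ID and subfolder are taken from the traceback, while device_map="auto" stands in for whatever option triggered the check:

```python
# pip install accelerate   # the dependency the ImportError asks for
from transformers import T5EncoderModel

# Same checkpoint and subfolder as in the traceback; passing a device_map (or
# low_cpu_mem_usage=True) is what requires accelerate to be installed.
text_encoder = T5EncoderModel.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    subfolder="text_encoder",
    device_map="auto",
)
```

After installing accelerate in Colab, restart the runtime so the newly installed package is picked up before rerunning the cell.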
@karthiksreekannan a month ago
Hi
@RitheshSreenivasan 28 days ago
Hello
@anaghacasaba9351 a month ago
Hi, can you please make a video on fine-tuning Llama 3 with a PDF?
@RitheshSreenivasan 24 days ago
OK, let me see.
@tonyanudeep8268 a month ago

from langchain import PromptTemplate, LLMChain

template = """
Write a concise summary of the following text delimited by triple backquotes.
Return your response in bullet points which covers the key points of the text.
```{text}```
BULLET POINT SUMMARY:
"""

prompt = PromptTemplate(template=template, input_variables=["text"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

While executing this, I am getting:
ImportError: cannot import name 'create_model' from 'langchain_core.runnables.utils' (/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/utils.py)
Please help me resolve this issue.
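That ImportError usually points to mismatched langchain and langchain-core versions rather than to the snippet itself. A minimal sketch of one way around it, assuming upgrading both packages is acceptable and reusing the `template` string and the already-constructed `llm` from the snippet above; the pipe-style chain is the current replacement for LLMChain:

```python
# pip install -U langchain langchain-core   # keep the two packages in sync (assumption: upgrading is acceptable)
from langchain_core.prompts import PromptTemplate

# `template` is the same triple-backquote summary prompt from the snippet above,
# and `llm` is assumed to be an already-constructed LangChain LLM.
prompt = PromptTemplate(template=template, input_variables=["text"])

chain = prompt | llm                      # pipe-style composition instead of LLMChain
summary = chain.invoke({"text": "...your document text..."})
print(summary)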
@aakashmittal8598 a month ago
Can we use this for screen images also?
@RitheshSreenivasan a month ago
Do a screen capture and send the images to GPT-4o or GPT-4V.
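A minimal sketch of that suggestion, assuming the screenshot is already saved as screenshot.png (a placeholder file name), the official openai Python client is installed, and OPENAI_API_KEY is set in the environment:

```python
# pip install openai
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode the (placeholder) screenshot file as a base64 data URL.
with open("screenshot.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown on this screen."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```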
@ReviewerCame a month ago
What's the cost of the API?
@patagonia4kvideodrone91 a month ago
KZfaq's automatic translation is very, very bad, and perhaps your pronunciation too, although I understood more by listening than by reading the ramblings it generated as a translation. I encourage you to practice your pronunciation, or to translate it with Whisper, so that you can see for yourself whether it makes sense or not. Things like "transformers of the porn border" came out, so imagine.
@RitheshSreenivasan a month ago
Can't help it if KZfaq's automatic translation is broken. If you can't understand my pronunciation, don't watch the videos.
@sandipchavan2298 a month ago
Best
@sanjaysan9249 a month ago
I have one query about language support. Does KeyBERT support the Kannada language, sir?
@RitheshSreenivasan a month ago
I don't think so
@impushprajyadav a month ago
Nice explanation
@RitheshSreenivasan a month ago
Thank You
@VenkatesanVenkat-fd4hg a month ago
Great explanation. I won't ask whether AI will replace jobs, but I would like you to discuss future trends for upskilling in the Indian AI market. I believe most startups will just use these APIs and call themselves AI companies....
@RitheshSreenivasan a month ago
Thank You!
@codingandgamesforfun2234 a month ago
Hey Ritesh, is it even possible to fine-tune this model, since nothing is mentioned about it?
@RitheshSreenivasan a month ago
Not possible
@DEEKSHASRIVASTAVA-wf6fg a month ago
Sir, please teach something about Llama and LangChain.
@RitheshSreenivasan a month ago
Have you looked at my other videos? I have a lot of videos on Llama and LangChain.
@vegabondd a month ago
@RitheshSreenivasan Yes sir, I have seen them.
@dumbol8126 a month ago
Is it faster than faster-whisper?
@RitheshSreenivasan a month ago
You will have to check. I did this video a long time back, so I'm unsure.
@VenkatesanVenkat-fd4hg a month ago
Waiting for your videos these past days... I have one doubt: if big companies like OpenAI and Google (Gemini) provide APIs that handle most features, including vision and audio, how will research happen in the Indian AI business? Most startups will simply consume these APIs. I think there is no need for data scientists or data science departments anymore here. What are your thoughts on this? From a Senior Data Scientist with a PhD....
@RitheshSreenivasan a month ago
All these APIs will work for 80% of cases. You still need to tweak the code for better results, so fine-tuning may be required. These APIs will not work for all use cases.
@VenkatesanVenkat-fd4hg a month ago
@RitheshSreenivasan Yes, you are correct, sir. I think with this accuracy, along with proper prompt strategies, startups will manage to get funding using recent graduates themselves... my perception is that deep dives are not required....
@dr.mikeybee a month ago
Where is it? Fantasyland?
@RitheshSreenivasan a month ago
Yes, Fantasyland as of now. But it will land in Google products later, once they have a polished product.
@aydinahmadli7005 a month ago
Can we do this via an API?
@fouziaanjums6475 a month ago
@Rithesh Sreenivasan Sir, could you please make a video on the recently launched Idefics2 by Hugging Face, and also on fine-tuning it for a custom dataset? I would be really glad 😊
@RitheshSreenivasan a month ago
Let me look into it
@youme101ptg a month ago
Can you share your Google Colab?
@RitheshSreenivasan a month ago
I don't have the Colab now, but the code is simple. Look at the rembg GitHub.
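For reference, a minimal sketch of the kind of code involved, assuming the rembg and Pillow packages are installed; the file names are placeholders:

```python
# pip install rembg pillow
from rembg import remove
from PIL import Image

# Load an input image, remove its background, and save the result with transparency.
input_image = Image.open("input.jpg")   # placeholder file name
output_image = remove(input_image)
output_image.save("output.png")         # PNG keeps the transparent background
```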
@danielagaio8090 2 months ago
Why would you just read what's already there, without any practical example? People can read.
@RitheshSreenivasan 2 months ago
Why did you watch the video? Not everyone reads.
@ReviewerCame a month ago
True... So does Rithesh have an example scenario of actual cost savings?
@LezZeppelinFanPage-nm1ly 2 months ago
Can you make a step-by-step guide? I really want to learn but do not know where you are installing things. Is it in Command Prompt?
@RitheshSreenivasan 2 months ago
It is in a Colab notebook.
@parthrangarajan3241 2 months ago
Hi, great video! Is it possible to map each topic to its respective documents?
@user-vv1up2vo4f 2 months ago
Sir, is there any code for how to fine-tune this model? Please reply. I'm trying to do it with my data.
@RitheshSreenivasan 2 months ago
No code to fine-tune this model; it is outdated. You can try with some LLM.
@codingandgamesforfun2234 a month ago
Are you able to fine-tune it?
@user-vv1up2vo4f a month ago
@codingandgamesforfun2234 No, bruh, I have worked on that. It's a long process with an LLM. We have to trim the data edge by edge to get the best possible accuracy. They didn't provide any architecture, docs, or data for Med7.
@fredsakay994 2 months ago
Could you please make a ready-to-use web page with this? Not every person is a programmer. I tried to build it in Colab and was unsuccessful. Please create this Meta translator as a web-page service for people to use online.
@RitheshSreenivasan 2 months ago
I am not that knowledgeable about creating web applications. That is why I try to create the back end.
@fredsakay994 2 months ago
@RitheshSreenivasan And how is it going?
@rnronie38 2 months ago
Sir, is this done on paid Colab? How can I do this in free Colab with a CPU? Is it even possible?
@RitheshSreenivasan 2 months ago
It should be possible if you use a quantized model. There are also other tools like Ollama that let you run it locally on a CPU.
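A minimal sketch of the Ollama route, assuming Ollama is installed and running locally, a quantized model has been pulled (e.g. via `ollama pull llama2`), and the ollama Python client is available; the model name is an example, not a recommendation:

```python
# pip install ollama   # assumes the Ollama server is installed and running locally
import ollama

# Chat with a locally pulled, quantized model on CPU; the model name is an example.
response = ollama.chat(
    model="llama2",
    messages=[{"role": "user", "content": "Summarize what a quantized model is in two sentences."}],
)
print(response["message"]["content"])
```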
@charitasri8162 2 months ago
Sir, can I get this Med-PaLM 2 model for free, so I can use it in my project related to elderly people's health? And sir, I have a few doubts regarding LLMs and related topics. How can I contact you? Please, sir.
@RitheshSreenivasan 2 months ago
Book a session on Topmate. The link is in the channel description.
@sunshine-seema6095 2 months ago
Can we run this in the Colab free version with a T4 GPU?
@RitheshSreenivasan 2 months ago
No, unless it is a quantised version.
@dr_harrington 2 months ago
Chroma? No, you are using the default, which is just in-memory.
@RitheshSreenivasan 2 months ago
Yes, I am using the default. It was a mistake.
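For anyone who wants a persistent Chroma store rather than the in-memory default, a minimal sketch of wiring it in explicitly, assuming `docs` and `embeddings` are already built as in the video; the import path can vary between LangChain versions:

```python
# pip install chromadb langchain-community
from langchain_community.vectorstores import Chroma

# Build a persistent Chroma index from already-split documents and an embedding model.
vectorstore = Chroma.from_documents(
    documents=docs,                   # assumed: list of Document chunks from the video's pipeline
    embedding=embeddings,             # assumed: the embedding model used in the video
    persist_directory="./chroma_db",  # on-disk location instead of the in-memory default
)

retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
```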
@vishnusureshperumbavoor 2 months ago
Thanks man, this is great.
@RitheshSreenivasan 2 months ago
Thank You
@ROKKor-hs8tg 2 months ago
Is a GPT-4 key required, or can it be 3.5?
@RitheshSreenivasan 2 months ago
A GPT-4 key is required.
@vishnusureshperumbavoor 2 months ago
How do I get access to this Hugging Face instruct model? Can you provide the link?
@RitheshSreenivasan 2 months ago
Google it and you will find it. The link is also present in the description.
@malik_fa 2 months ago
Great video. I would request that you also add a roadmap for learning modern NLP, given the many advancements in this field. Thanks.
@RitheshSreenivasan 2 months ago
I have a 2023 video as well.
@dennisrose40 2 months ago
Wonderful video, Rithesh. The RAG architecture diagram and your explanations, with the names of the major components you used, were very instructive.
@RitheshSreenivasan 2 months ago
Thank You!