13 Liquid Clustering in Databricks
7:56
7 hours ago
3 Skewness in Databricks
14:51
21 days ago
Website Walkthrough!
2:52
A month ago
Announcement!!
1:56
A month ago
Comments
@cloudfitness 3 days ago
The full course on Databricks optimization is available at bhawnabedi.graphy.com/courses/Databricks-Performance-Tuning--Optimizations-669c31ee460ad37e50a81b08
@anantpa 3 days ago
Hi @Bhawana, your videos are great, keep posting! I have a question: if optimization is automatic for LC (liquid clustering), why do we need to trigger the OPTIMIZE command?
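For context, a minimal sketch of what the question refers to (the table and column names are hypothetical): liquid clustering is declared with CLUSTER BY, and even where optimization runs automatically in the background, OPTIMIZE can still be issued manually to cluster newly written data on demand.

```python
# Hypothetical table; CLUSTER BY declares the liquid clustering key.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_lc (
        order_id BIGINT,
        region   STRING,
        amount   DOUBLE
    )
    CLUSTER BY (region)
""")

# Manual trigger: incrementally clusters files written since the last run,
# useful when you do not want to wait for the automatic/background schedule.
spark.sql("OPTIMIZE sales_lc")
```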
@gudiatoka 3 days ago
Insightful ❤
@cloudfitness 3 days ago
Complete course for DLT is available at bhawnabedi.graphy.com/s/store/
@karthikrajanatarajan7057 3 days ago
The most awaited one 👏
@user-my3uz6uu6m 4 days ago
Can we create a non-Delta table out of that CSV file that you saved in the volume? Can we discuss this?
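A minimal sketch of one way this could look (the catalog, schema, volume path, and storage URL below are all hypothetical assumptions): the CSV in a volume can be read directly as a DataFrame, and a non-Delta table can be declared with USING CSV, though the table LOCATION generally needs to be an external location rather than a path inside the volume itself.

```python
# Read the CSV straight from the Unity Catalog volume (hypothetical path).
df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/Volumes/main/default/raw_files/customers.csv"))

# Register a non-Delta (CSV-backed) table; the LOCATION here is a
# hypothetical external location, not the volume path itself.
spark.sql("""
    CREATE TABLE IF NOT EXISTS main.default.customers_csv
    USING CSV
    OPTIONS (header 'true', inferSchema 'true')
    LOCATION 'abfss://raw@examplestorage.dfs.core.windows.net/customers/'
""")
```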
@deepakpatil5059 8 days ago
The command "databricks bundle init" is failing with exit code 1. I have installed Databricks CLI version 0.223.0 and authentication has been set up properly. Can you please suggest what the issue could be?
@dnyandeokothawale 10 days ago
Could you please upload a full video on Hive-to-UC migration using UCX, covering group migration and table migration?
@grim_rreaperr 10 days ago
At 11:38 you mention that both the aes_encrypt and aes_decrypt functions need to be applied to binary columns, but when using aes_encrypt you supplied the passport_number column, which is a string. Can you please clarify?
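A minimal sketch of the behaviour being asked about (the table, column, and key below are hypothetical): aes_encrypt accepts a STRING argument because it is implicitly cast to BINARY, while aes_decrypt returns BINARY, so the result is usually cast back to STRING to read it.

```python
key = "0000111122223333"  # 16-byte demo key only; store real keys in a secret scope

spark.sql(f"""
    SELECT
        passport_number,
        aes_encrypt(passport_number, '{key}') AS encrypted_binary,
        CAST(aes_decrypt(aes_encrypt(passport_number, '{key}'), '{key}') AS STRING) AS decrypted_string
    FROM customers
""").show(truncate=False)
```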
@Vishal-q8e 10 days ago
How do you encrypt boolean or integer columns and get the encrypted value back in the same data type?
@tarun4494 11 days ago
Very good and clear understanding; I just watched your video after my training.
@sunilgurram9670 11 days ago
Can we store the output of the dbt run command in a shell script variable?
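A minimal sketch of one way to do this (the model selector is hypothetical): in a shell script it is plain command substitution, e.g. output="$(dbt run)", and the equivalent capture from Python uses subprocess.

```python
import subprocess

# Run dbt and capture its console output; "my_model" is a hypothetical selector.
result = subprocess.run(
    ["dbt", "run", "--select", "my_model"],
    capture_output=True,
    text=True,
)

dbt_log = result.stdout              # full run log, as a string
succeeded = result.returncode == 0   # dbt exits non-zero when the run fails
print(succeeded, dbt_log[:200])
```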
@mogilikanala9725 11 days ago
I am getting the error below; any help would be appreciated. 250001 (n/a): Could not connect to Snowflake backend after 2 attempt(s). Aborting. If the error message is unclear, enable logging using -o log_level=DEBUG and see the log to find out the cause. Contact support for further help. Goodbye!
@madhusudhanaerappareddy856 14 days ago
It's good to start with small videos in the beginning to understand the fundamentals, and this is one such video. Loved it!
@sarvannakolnar5499 14 days ago
What if I have a web application and, using Event Grid or Event Hubs, I want to store data in Redis?
@saikrishnarathod3176 14 days ago
Thank you for taking the time to make these videos. Greatly appreciated!
@GirishTharwani1992 15 days ago
Great information!!
@vallepumadhu3650 16 days ago
Ma'am, I got this error while creating that store table: AnalysisException: [UNRESOLVED_ROUTINE] Cannot resolve function `PARSE_JSON` on search path [`system`.`builtin`, `system`.`session`, `spark_catalog`.`default`].; line 2 pos 7
@cloudfitness 15 days ago
I have mentioned this in the video: you need to use DBR 15.3 or above.
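For reference, a minimal sketch assuming a cluster on a DBR release where PARSE_JSON and the VARIANT type are available (the table name and JSON fields are hypothetical):

```python
# A table with a VARIANT column; requires a runtime that supports VARIANT.
spark.sql("""
    CREATE OR REPLACE TABLE store (
        store_id INT,
        details  VARIANT
    )
""")

# PARSE_JSON converts a JSON string into a VARIANT value.
spark.sql("""
    INSERT INTO store
    SELECT 1, PARSE_JSON('{"city": "Pune", "open": true, "sq_ft": 1200}')
""")
```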
@davidcook6618 16 days ago
Useless video, you're just reading out what's written on that page, which you took from the internet.
@FTI2023 17 days ago
Please update the part at 18:29: currently there are 4 trigger types: Custom / Tumbling / Schedule / Storage Event.
@someshkhatawe3404 18 days ago
Nice video. Good to see the latest content uploaded so quickly. I was doing a POC using VARIANT to filter a table based on a nested JSON field, and I must say it's pretty good. It does have some learning curve and needs more functions, because once you convert a column to VARIANT you cannot use string or other data-type functions on it unless you cast it.
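A minimal sketch of the casting point made above, reusing the hypothetical store table sketched earlier: VARIANT fields are extracted with the path syntax and must be cast before string or boolean functions apply.

```python
spark.sql("""
    SELECT
        store_id,
        details:city::string                    AS city,
        upper(details:city::string)             AS city_upper,  -- string function only after the cast
        variant_get(details, '$.sq_ft', 'int')  AS sq_ft
    FROM store
    WHERE details:open::boolean = true
""").show()
```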
@gudiatoka 18 days ago
After Spark 3.0, AQE is already enabled, so skewness can be handled easily...
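A minimal sketch of the settings involved (the threshold values shown are the documented Spark defaults, listed here only for illustration): AQE and its skew-join handling are controlled by the configs below, and they rebalance skewed partitions in sort-merge joins, so extreme skew may still call for salting or join hints.

```python
# Adaptive Query Execution and its skew-join handling.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")

# Thresholds that decide when a partition counts as skewed
# (documented defaults, shown only for illustration).
spark.conf.set("spark.sql.adaptive.skewJoin.skewedPartitionFactor", "5")
spark.conf.set("spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes", "256MB")
```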
@KavithaPari 18 days ago
Thank you Bhawna 🎉
@cbshankar 18 days ago
Can you explain Spark interview questions and answers?
@cloudfitness 18 days ago
It's already covered in the Databricks interview questions playlist.
@cbshankar 18 days ago
@cloudfitness thank you
@sreekanth6180 19 days ago
Hello Bhawna, you are doing a great job! Very informative, and a much-needed tool for everyone now trying to bring UC into their respective Databricks workspaces. Won't UCX check the compatibility of notebooks? I mean, it would have been better if it pointed out the notebooks that are incompatible by looking for incompatible objects used in those respective notebooks.
@sravankumar1767 19 days ago
Superb explanation Bhawana
@SurajitMetya 21 days ago
I ran databricks bundle init default-python and got this error: $ Error: failed to compute file content for {{.project_name}}/databricks.yml.tmpl. template: :35:31: executing "" at <user_name>: error calling user_name: Endpoint not found for /2.0/preview/scim/v2/Me
@chandrashekhar-f7f 21 days ago
What is the Lookup activity? What is a dataset in ADF? What is an integration runtime and what are its types?
@cloudfitness 21 days ago
All of these are already covered in the ADF interview series playlist!
@chandrashekhar-f7f 21 days ago
Thank you so much 👍, god bless you 🙏
@madhukarjaiswal0206 22 days ago
It's okay, but it seems like it's covered as a generic theory session, skipping some parts here and there. I feel it would have been better if it had been demonstrated practically. Nothing is said about the performance improvement each of these optimizations produced, nor were skew or spill shown in the Spark UI.
@rupaghosh6251 24 days ago
This channel should have a lot more subscribers; the way she explains everything is so clear. I will recommend it to all of my friends and colleagues for whom it's relevant.
@gurumoorthysivakolunthu9878 25 days ago
Thank you, Ma'am, for sharing your knowledge. I have a question: why was there no "adf_publish" branch when we created the pull request from the feature branch? Does this mean that to get the ARM template we must not work in a feature branch, and should instead work directly on the development branch and just "Publish", which will create the "adf_publish" branch and give us the ARM template? Is this the best practice for real-time projects? Please clarify. Thank you.
@eswarakhil7504 26 days ago
Please make a playlist of videos for SQL interview questions as well. It would be very helpful.
@Rider-jn6zh 27 days ago
I created a Fabric account by following your video, but I don't see the tenant settings in the admin portal. Please let me know how this can be enabled.
@ameliemedem1918 27 days ago
Hi CloudFitness, I plan to use the snapshot approach to monitor the changes of a DLT table over time (daily). Can you please confirm that the snapshot type is the right one? I'm confused about the baseline table: can I use the first version of the DLT table as the baseline? Thanks a lot for your answer :)
@subhashyadav9262 27 days ago
Great
@Rakesh-if2tx 27 days ago
Thanks Bhawna!
@rupaghosh6251 27 days ago
You are an amazing teacher. Can you please create some videos on Power BI and DAX? Honestly, this is some of the best content.
@reachrishav 28 days ago
Will this work if the element types in the arrays are different, i.e. the first array has string elements while the second array has struct or int elements?
@user-nc8co6se4b 28 days ago
Congratulations Bedi, and could you please continue and complete the ADB Hindi tutorials?
@ameliemedem1918 28 days ago
Mind-blowing🤩🤩🤩!!! That's exactly what I needed! Thanks again :)
@ameliemedem1918 28 days ago
Whaaooo! Thanks a lot CloudFitness! Amazing explanations 🤩
@cloudfitness 28 days ago
@ameliemedem1918