Really well explained! Please make an interview questions and answers series.
@mallik_cmc3262 a year ago
The content explanation is very nice, but one suggestion: use proper naming conventions. That will make things clearer for new users.
@surajbasha9062 2 months ago
Recent interview questions: 1. If you are using Unity Catalog in your project, can we use service principals to connect ADF to Databricks? Sir, can you please explain this in depth?
@ravikumarkumashi7065 a year ago
Very crisp and clean, thank you for this video.
@gosmart_always a year ago
Thank you so much for your video. It was much-needed help.
@surajbasha9062 2 months ago
2. Can we use YARN as the cluster manager / resource manager for Spark in Databricks, in real-time projects?
@bhromonindia a year ago
Love it, very well explained.
@WafaStudies a year ago
Thank you 😊
@kanhashukla6265 3 months ago
Thanks. This helped a ton
@bollywoodbadshah.796 2 years ago
Thanks a lot, sir.
@WafaStudies 2 years ago
Welcome ☺️
@edwardr8826 4 months ago
Wafa, you're the best!
@shivambansal3560 2 months ago
Do we not need to mount after setting the Spark config?
@sgr8280 a year ago
The videos on connecting to Data Lake Storage are confusing. Why should we use a service principal when we can access it through Azure Key Vault directly?
@ramum4684 a year ago
I want to know what the benefit is of using this service principal ID, name, and value in the OAuth configuration, when we can access files from Blob Storage directly using just a secret scope. Is there any advantage to this?
@gosmart_always 9 months ago
I followed your instructions, but it is still throwing the error "Unsupported Azure scheme: abfss". May I know why, and what the steps to fix it are?
@ConCom665 5 months ago
What if the storage account has HNS (hierarchical namespace) disabled and I still want to use an SPN?
@amolpariharaus 2 years ago
Please make a video on the PolyBase and JDBC approaches.
@dineshdeshpande6197 2 years ago
Sir, how can we configure the Azure Databricks Hive metastore for an external ETL tool like Informatica? The purpose is to fetch data from Hive tables and use the Databricks engine for pushdown optimization, to improve the performance of data fetching.
@joestopansky6375 2 years ago
Thanks for the video; it is very informative. Using this method, do you need to execute the spark.conf.set() commands every time you restart the cluster? My guess is that you would, since you are only affecting the configs of this specific Spark session.
@ravikumarkumashi7065 a year ago
Yes. In real projects these configurations are part of your application code; once your cluster restarts, it kills your application due to driver unavailability, and you need to start from the beginning.
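For readers following this thread: the session-scoped OAuth configuration being discussed can be sketched as a small helper that builds the Spark config keys for ADLS Gen2 service-principal access. The storage-account name, client ID, secret, and tenant ID below are placeholders, not values from the video; in a real notebook these would come from a secret scope via dbutils.secrets.get().

```python
def adls_oauth_conf(storage_account: str, client_id: str,
                    client_secret: str, tenant_id: str) -> dict:
    """Build the session-scoped Spark configs for ADLS Gen2 OAuth access
    via a service principal (ABFS driver)."""
    host = f"{storage_account}.dfs.core.windows.net"
    return {
        f"fs.azure.account.auth.type.{host}": "OAuth",
        f"fs.azure.account.oauth.provider.type.{host}":
            "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
        f"fs.azure.account.oauth2.client.id.{host}": client_id,
        f"fs.azure.account.oauth2.client.secret.{host}": client_secret,
        f"fs.azure.account.oauth2.client.endpoint.{host}":
            f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
    }

# Placeholder values for illustration only.
conf = adls_oauth_conf("mydatalake", "app-id", "app-secret", "tenant-guid")

# In a Databricks notebook you would then apply these each session, which is
# why they must run again after every cluster restart:
# for k, v in conf.items():
#     spark.conf.set(k, v)
```

Because spark.conf.set() only affects the current session, this snippet typically lives at the top of the job's notebook or in an init routine, confirming the point made in the reply above.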
@johnpaulprathipati153 2 years ago
Hi sir, I have been following every video in this Databricks playlist. Could you tell me how many more videos there will be to complete it?
@bamidelejames375 2 years ago
Hi, can you cover how to set up a shared external Hive metastore to be used across multiple Databricks workspaces? The purpose is to be able to reference dev-workspace data in a prod instance.
@sewastudies6835 2 years ago
We call 10 APIs through ADF in Azure; 6 succeed and 4 fail. How do we get only the failed APIs?
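One common answer to the question above is to record each call's outcome and filter for failures; inside ADF itself this maps to a Filter activity over the ForEach output. A minimal Python sketch of the filtering idea, where the result shape and field names ("api", "status") are assumptions for illustration, not the ADF schema:

```python
# Hypothetical per-API results, e.g. collected from a ForEach loop of
# Web activities in an ADF pipeline.
runs = [
    {"api": "orders", "status": "Succeeded"},
    {"api": "customers", "status": "Failed"},
    {"api": "invoices", "status": "Failed"},
    {"api": "products", "status": "Succeeded"},
]

# Keep only the names of the calls that failed.
failed = [r["api"] for r in runs if r["status"] == "Failed"]
# failed == ["customers", "invoices"]
```

In an ADF Filter activity, the equivalent condition would be an expression comparing each item's status to 'Failed'.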
@ishaangupta4941 2 years ago
Hi, one question. Don't you have to mount the file system again using this Azure service principal configuration? I think you are able to read the data because your storage is already mounted via direct access keys?
@ravikumarkumashi7065 Жыл бұрын
I think he is not using dbfs mount here and once your are authorized using service principal you can directly read from storage account but yes you can mount your adls to databricks file system once and it is set workspace level and then you can start reading from dbfs directly instead of adls
@alonsom.donayre1992 a year ago
@ravikumarkumashi7065 Mounting is a deprecated pattern for storing and accessing data; it's not recommended anymore. Using the ABFS driver is the best way right now! docs.databricks.com/external-data/azure-storage.html
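To illustrate the direct-access pattern the last comment recommends: instead of a DBFS mount point, you read straight from an abfss:// URI. A small sketch of building such a URI (container, account, and path names are placeholders):

```python
def abfss_url(container: str, storage_account: str, path: str = "") -> str:
    """Build a direct ABFS (abfss://) URI for ADLS Gen2, the pattern
    recommended over DBFS mounts."""
    base = f"abfss://{container}@{storage_account}.dfs.core.windows.net"
    return f"{base}/{path.lstrip('/')}" if path else base

# Placeholder names for illustration.
url = abfss_url("raw", "mydatalake", "/sales/2023/data.parquet")
# url == "abfss://raw@mydatalake.dfs.core.windows.net/sales/2023/data.parquet"

# In a notebook (after the session's auth configs are set):
# df = spark.read.parquet(url)
```

The design point: an abfss:// path carries the container and account explicitly, so access is governed per session by the credentials configured for that account, rather than by a workspace-wide mount created with some earlier credential.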