Bringing a Streamlit App to Production
9:57
Shop Talk: 2024-08-12
57:08
14 days ago
Chat with Your Own Data with Streamlit
31:29
Shop Talk: 2024-07-29
1:01:25
21 days ago
Chat with Azure OpenAI in Streamlit
18:40
Forms and Filters in Streamlit
10:45
Shop Talk: 2024-07-15
1:02:43
1 month ago
An Introduction to Streamlit
18:01
Shop Talk: 2024-07-01
1:02:56
1 month ago
Tips for Choosing a Classifier
11:34
Shop Talk: 2024-06-17
1:03:53
2 months ago
Naive Bayes
20:17
2 months ago
Shop Talk: 2024-06-03
1:02:20
2 months ago
Logistic Regression
17:31
2 months ago
K-Nearest Neighbors
20:17
3 months ago
Shop Talk: 2024-05-20
1:02:07
3 months ago
Accuracy Is NOT Enough
21:48
3 months ago
Gradient Boosting
13:46
3 months ago
Shop Talk: 2024-05-06
1:01:04
3 months ago
Random Forests
12:25
3 months ago
A Primer on Classification
25:43
3 months ago
Comments
@binihalex8097 18 hours ago
This is very educational. It helped me a lot to get a basic understanding of the GitHub Advanced Security features. Thank you so much!
@BadreddineSaadioui 1 day ago
Nah man, using the VS Code white theme is illegal
@XavierValBlasi 8 days ago
Hello, one quick question: I have been able to connect my model to Power BI without any issue, but now I want to update the data (changing one column, for example) so I can get new predictions. Can this be done outside of Power Query, or do I need to modify the data there directly?
@KevinFeasel 8 days ago
If the update involves adding more data to an existing dataset (e.g., more rows in a text file or more records in a SQL Server query result), then you'd just need to refresh the dataset. If you mean changing the specific input columns you want to use for your model call, then yes, you'd modify that in Power Query.
@roverteam4914 12 days ago
Hi, I read through the Microsoft Learn document, but I'm still confused about the difference between Azure ML designer and Azure ML studio... I saw some comments saying Studio is the newer version of the designer; is this valid? The explanation says both are GUIs for creating workflows...
@KevinFeasel 12 days ago
The confusion here is in naming. Yes, "the Azure Machine Learning Designer" is an old web UI for Azure Machine Learning. Azure Machine Learning Studio replaced this old Designer web UI in its entirety, and we only look at Azure Machine Learning Studio (ml.azure.com) in this series. But within Azure Machine Learning Studio is a component known as the designer (the topic of this video). It even does many of the things the old UI included, but uses different code. Hopefully that helps clarify things a little bit here.
@sailing_life_ 15 days ago
I have built an ML classification model in a Python Jupyter notebook. I was initially thinking I could export the model as a joblib file and then write a Python script to continuously score my data as new data comes in. However, upon reading more, I found out that Python scripts only work with Personal Gateways, not Enterprise Gateways. We need to run our report on an Enterprise Gateway, so the Python script approach won't work. My questions are: 1) If I deploy my model to Azure ML, can I connect to it in Power Query and have new claims scored after publishing to the Power BI Service on the Enterprise Gateway? 2) Do I need Fabric to do point #1 above? (My client doesn't yet have Fabric and probably wants to hold off on making that investment at the moment.) 3) Do I need Power BI 'Premium' like the $20/user version, or do I need the 'Premium Capacity' version?
@KevinFeasel 8 days ago
Prefacing this with "I'm not an expert on Enterprise Gateways and don't have a convenient way to test what I'm about to say," here are my answers. 1. You should be able to, yes. The Azure ML integration point reads data that has already been pulled out from the gateway, so from that perspective, it should not matter that the data initially came from on-premises. 2. No. The example I show in the video does not tie in with Fabric. Also, given the limitations that I showed in the video, this is definitely a "pre-Fabric" solution. 3. For this solution, you'd only need Power BI Professional for end users. Premium Per User (the $20 per user version) and explicit Premium capacity will also work but are not required.
@shakilquadri 16 days ago
Thank you, Feasel. You prepared a good video for beginners to start with SQL Server installation.
@nguyenngoctrac4866 17 days ago
How can I download the script and demo database? Thanks!
@KevinFeasel 17 days ago
I have a link in the show notes containing this information: 36chambers.wordpress.com/2024/02/20/video-transactional-replication-in-sql-server-on-linux/. I just updated it to include a copy of the BLS database.
@_PulpoPaul 18 days ago
Great video. Thank you!
@chad_baldwin 21 days ago
I don't know why, after all these years, I'm JUST learning you have a YouTube channel. Immediate subscribe, especially with all the work you do promoting blogs (including my own).
@pmo4325 25 days ago
Oh no it seems like the setup steps have changed in Automated ML :(
@KevinFeasel 24 days ago
It does look like they've shuffled around the UI a bit, though in a quick review, the components do look to be the same (mostly) when you look across all of the dialog options for the old UI and the new one. There's an additional option in Compute for serverless compute instead of using an existing compute instance or cluster, but that appeared to be the biggest single change.
@pmo4325 24 days ago
@@KevinFeasel I see. I'm a Power BI developer but a complete novice who is tinkering with this at work. I would really love a video showing the new UI features, and also a newbie's guide to training a model in Automated ML and then deploying it. FTR, your channel is very helpful so far; by far the most comprehensive that I've come across on Azure ML Studio.
@KevinFeasel 22 days ago
That sounds like a good idea. I'll add it to my backlog.
@user-tb3oi7vg4j 25 days ago
Thanks, Feasel, for enabling me to install it through this video.
@SiLVERiCE8707 27 days ago
It works like a charm! Thank you for this!!!!
@ManoLeitonez 28 days ago
Thank you, Kevin. The video is great. Unfortunately, I'm noticing through stress tests that SQL Server on Ubuntu is 11x less performant than on Windows (with very similar resources: 4 vCores, 16 GB RAM). Is there any configuration to make SQL Server perform better on Linux?
@KevinFeasel 28 days ago
I do cover this in the next video in the series: kzfaq.info/get/bejne/eeCegKag2rnXfWg.html. In it, I cover most of the document Microsoft has put together on optimizing a Linux installation for SQL Server: learn.microsoft.com/en-us/sql/linux/sql-server-linux-performance-best-practices?view=sql-server-ver16. I will also say that, in talking to other people who have performed similar stress testing, I've heard numbers ranging anywhere from "approximately the same" to about 10% less performant on Linux. The biggest gap post-tuning was from someone using some powerful hardware (on-prem, double-to-triple-digit cores, direct-attached NVMe storage, TB+ of RAM, etc.), so that does come in as a factor.
@peteralmanzar 1 month ago
Does Azure have components to one-hot encode or scale features?
@KevinFeasel 1 month ago
The scaling component in Azure ML Designer is called Normalize Data, so there is a built-in scaling capability. I believe there is not a built-in one-hot encoding capability. To do that, I'd recommend adding an R or Python component and have it do the work.
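To make the reply above concrete, here is a minimal sketch of what an R or Python component could do for one-hot encoding. The `azureml_main` entry point mirrors the convention that designer Python script components use, and the column names are purely illustrative, not taken from the video.

```python
# Sketch of a one-hot encoding step for an Azure ML Designer Python component.
# Assumption: the component calls azureml_main(dataframe1, dataframe2) and
# expects a tuple of DataFrames back. Column names here are made up.
import pandas as pd

def azureml_main(dataframe1=None, dataframe2=None):
    # get_dummies one-hot encodes string/categorical columns and passes
    # numeric columns through untouched.
    encoded = pd.get_dummies(dataframe1)
    return encoded,

if __name__ == "__main__":
    df = pd.DataFrame({"color": ["red", "blue", "red"], "value": [1, 2, 3]})
    out, = azureml_main(df)
    print(sorted(out.columns))  # → ['color_blue', 'color_red', 'value']
```

For scaling, the built-in Normalize Data component mentioned above would typically be the simpler choice.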
@nirajbaral2415 1 month ago
Hey, do you have documentation for the replication (your documentation, not from Microsoft)? That would be helpful, thank you 🙏
@KevinFeasel 1 month ago
I've never written anything on replication other than this video. The notes for the video (including code) are at 36chambers.wordpress.com/2024/02/20/video-transactional-replication-in-sql-server-on-linux/ but that's about it.
@pmo4325 1 month ago
How absurd that MS can't just make this a simple one click solution!
@samuelarias820 1 month ago
I currently have Ubuntu 24.04; does that mean I still won't be able to install SQL Server?
@KevinFeasel 1 month ago
@@samuelarias820 Correct. There might be a way to finagle installation of libraries SQL Server is expecting but they only officially support up to 22.04 at this time. They'll support 24.04 eventually but it will probably be sometime in 2025, based on when support for 22.04 came out.
@samuelarias820 1 month ago
@@KevinFeasel Thank you very much for the information; we'll keep an eye out in case new support is released.
@statisticsdomain5104 1 month ago
Thanks big boss
@IsabellGurstein 1 month ago
Ty very much for your videos. I have a question: Is the Pipeline Tool capable of saving the dataset results in CSV format instead of Parquet?
@KevinFeasel 1 month ago
I don't believe so. Parquet is typically a much more compact format than CSV and includes headers and additional metadata, so Microsoft uses the Parquet format. You could convert the Parquet file to CSV. For example, if you have Python available, it's pretty easy as long as you have Pandas and PyArrow installed (pip install pandas pyarrow). You'd read the Parquet file with something like df = pd.read_parquet('file location') and then write it to CSV with df.to_csv('new file location').
@IsabellGurstein 1 month ago
@@KevinFeasel Hey, ty for your quick answer. Yeah, I thought of this too, but was wondering whether I had missed something. I have another question: Do batch endpoints always have to register the input data in Azure ML? Is there any chance to avoid that?
@KevinFeasel 1 month ago
@@IsabellGurstein Technically, no. You can use a local file path (assuming you're running the job locally), a public HTTPS path, or a WASBS path, based on learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-inputs-outputs-pipeline?view=azureml-api-2&tabs=cli. It's considerably easier (particularly when it comes to security) to register a datastore location and move files in and out of that location for processing, so that's typically what I do.
@fatimahehab7986 1 month ago
Worked on Linux Mint 21.3. Thanks!
@itspersie 1 month ago
On Ubuntu 24.04, I think it won't work after the config command. I was getting this: /opt/mssql/bin/sqlservr: error while loading shared libraries: liblber-2.5.so.0: cannot open shared object file: No such file or directory
@KevinFeasel 1 month ago
Yeah, it's likely that this will not work on Ubuntu 24.04 as is. 24.04 isn't officially supported by Microsoft yet, as it just came out recently. I wouldn't expect official support for another year or thereabouts. There may be a way to force installation, but I wouldn't recommend it when you can run SQL Server on Linux in a Docker container or a 22.04 VM as an alternative.
@tshokelo1219 8 days ago
@@KevinFeasel I get the same error on 22.04: /opt/mssql/bin/sqlservr: error while loading shared libraries: liblber-2.4.so.2: cannot open shared object file: No such file or directory
@KevinFeasel 5 days ago
@@tshokelo1219 The error is different because that's a different version: 2.4 versus 2.5. Make sure you're using the Ubuntu 22.04 repo for Microsoft and that you've deleted any prior repos that might contain SQL Server. That's asking for the older version of OpenLDAP. Check for existing Microsoft repos in /etc/apt/sources.list.d/ and /etc/apt/sources.list.d/additional.repositories.list, remove them, and then add the repo specifically for Ubuntu 22.04. That should resolve this installation issue.
@ArmandoPineda4 1 month ago
Thank you very much!
@arthurcarvalho7382 1 month ago
If, when running "sudo apt-get install -y mssql-tools18 unixodbc-dev", you get an error such as "Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?", just run "sudo killall apt apt-get" and rerun the command.
@EmrynHofmann 1 month ago
Hi, great video! - loving the series :) Any chance you've got another resource on the Azure DevOps side of things? It seems Microsoft has shut down their workshops :(
@KevinFeasel 1 month ago
The first thing that comes to mind is a Microsoft Cloud Workshop on continuous delivery. After Microsoft shut down that program, some of them are now hosted by Solliance: github.com/solliancenet/microsoft-mcw-continuous-delivery/tree/main. I will note that I didn't work on that one, so I'm not sure how well it fits what you're looking for. Microsoft does have the Azure DevOps Labs: www.azuredevopslabs.com/. They also have a good number of MS Learn modules on ADO: learn.microsoft.com/en-us/training/browse/?products=azure-devops
@sahithpoojary5271 1 month ago
What is the password for the login?
@KevinFeasel 1 month ago
It's whatever you set the password to be. There is no default password. At 6'05", you can see that I set the password to the phrase "WeNeedABetterPassword!!!1" but you'd probably want to use a better password than that.
@KevinFeasel 1 month ago
And if you've forgotten the password, you can reset it if you have root access to the machine: www.sqlservercentral.com/blogs/reset-sa-password-on-sql-server-on-linux
@emrynhofmann5757 1 month ago
Great video! - Just wondering why you chose to use YAML files over the Python SDK's CommandComponent(). Is one better than the other, or is it more of a personal preference?
@KevinFeasel 1 month ago
Entirely personal preference. They'll both work the same way, so the only reason I prefer the YAML version at all is because I can also use similar files with the az cli and run R or .NET code to perform similar deployments without changing quite as much.
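For reference, the YAML approach mentioned above looks roughly like the following minimal sketch of a command component definition. The names, paths, and environment reference are illustrative assumptions, not taken from the video.

```yaml
# Hypothetical command component definition (names and paths are illustrative).
$schema: https://azuremlschemas.azureedge.net/latest/commandComponent.schema.json
name: train_model
display_name: Train Model
version: 1
type: command
inputs:
  training_data:
    type: uri_folder
outputs:
  model_output:
    type: uri_folder
code: ./src
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
command: >-
  python train.py
  --training_data ${{inputs.training_data}}
  --model_output ${{outputs.model_output}}
```

The Python SDK's CommandComponent() builds the same structure programmatically, which is why the two approaches end up interchangeable.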
@Zikato25 1 month ago
Great video, Kevin! Very straight to the point.
@zuleideferreira4424 1 month ago
Thanks a lot for this video. It was exactly what I was looking for.
@erkany6753 2 months ago
This is a great overview. Thanks 🎉
@ivancasanagallen 2 months ago
Great stuff, it is handy indeed! I was expecting to see a batch endpoint at the end of this process, but there isn't any, if I am correct. Thanks!
@KevinFeasel 1 month ago
The batch endpoint won't show up in the Endpoints menu, as those are for code-first endpoints. But at 8'45" in the video, you can see the "Publish" button that lets you create a new pipeline endpoint. If you go through that process, you'd be able to see the newly created endpoint on the Pipelines page. I didn't do that in this video, though, so that's why the only asset we have is the pipeline in the Designer tab.
@ivancasanagallen 2 months ago
Excellent work, Kevin! You make complex concepts easy to digest. I was wondering if you could share how to deploy a model generated with AutoML. A Voting Ensemble, for example. My guess is that the Voting Ensemble is going to be the best shot for many users, at least initially. That's why I am thinking it would be interesting to see how to produce a batch endpoint. Thanks!
@KevinFeasel 2 months ago
AutoML model deployment is something I can add to my backlog, sure. It may be a bit before I'm able to get the video out, so I'd recommend checking out the docs on this in the meantime: learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-automl-endpoint?view=azureml-api-2&tabs=Studio for deploying an AutoML endpoint (online, not batch) and learn.microsoft.com/en-us/azure/machine-learning/how-to-use-batch-model-deployments?view=azureml-api-2 for batch deployment.
@hinaque4505 2 months ago
Thank you for these amazing videos. Can you please help me understand what Postman is? And is it part of Azure studio? Thanks A LOT!
@KevinFeasel 2 months ago
Postman (www.postman.com/) is not part of any Azure or Microsoft product. It is a tool you can use to test REST APIs. I use it in this video (and in quite a few other places) because it provides me an easy way of making an API call, seeing the response, and even building tests around how I expect the response to look.
@hinaque4505 2 months ago
@@KevinFeasel Thank you so much for your prompt reply, your videos are amazing!!!
@user-ng4pp7fp6t 2 months ago
Hi Kevin, can you please record videos with a higher volume? I'm not able to hear. Good job on the content, though. Thanks
@KevinFeasel 2 months ago
I think I can do that for future videos. I know I typically have the gain turned way down to minimize outside noises but I should be able to do something for the videos.
@muhammadtalmeez3276 2 months ago
Can we integrate LLM models in a Power BI dashboard to ask questions of the data? If yes, how?
@KevinFeasel 2 months ago
As of today, the answer is kind of complicated. You can use Azure OpenAI within Power Query to enrich data: techcommunity.microsoft.com/t5/educator-developer-blog/how-to-use-azure-open-ai-to-enhance-your-data-analysis-in-power/ba-p/4041036. And there is Copilot for Power BI within Microsoft Fabric, but it requires Microsoft Fabric and at least an F64 SKU: learn.microsoft.com/en-us/power-bi/create-reports/copilot-introduction. There is at least one third-party custom component that might do what you want in AI Lens for Power BI, though there will likely be additional costs for licensing it: www.lensvisual.io/
@muhammadtalmeez3276 2 months ago
@@KevinFeasel Thanks, Kevin, for the detailed answer, but instead of OpenAI or Copilot, I want to use another open source model from Hugging Face. Is it feasible?
@KevinFeasel 2 months ago
@@muhammadtalmeez3276 Probably not without a significant amount of development work on your end. Just spitballing an answer, you'd probably need to host your model via API and then create a custom component to send the user prompt plus data to that API and get the response back. It's the "plus data" that would make this a real challenge, especially if you intended to use it with sliders and to view data on charts. It's not something I've done before, though I suppose it is technically possible, just a major endeavor.
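To illustrate just the "host your model via API" half of that answer, here is a standard-library-only sketch of a tiny inference endpoint of the kind a custom component could call. The predict() function is a stub standing in for a real Hugging Face model, and the JSON payload shape is purely an assumption for the example.

```python
# Minimal sketch of exposing a "model" behind an HTTP API using only the
# Python standard library. predict() is a stub; a real deployment would load
# and run an actual model. The {"prompt": ..., "rows": ...} payload shape is
# invented for illustration.
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def predict(prompt: str, rows: list) -> str:
    # Stub: a real implementation would run the LLM over the prompt plus data.
    return f"Received {len(rows)} rows for prompt: {prompt!r}"

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body a caller (e.g., a custom component) would send.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        answer = predict(payload.get("prompt", ""), payload.get("rows", []))
        body = json.dumps({"answer": answer}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve(port: int = 0) -> ThreadingHTTPServer:
    # port=0 lets the OS pick a free port; server.server_address[1] holds it.
    return ThreadingHTTPServer(("127.0.0.1", port), InferenceHandler)
```

The genuinely hard part, as the reply notes, is the "plus data" side: getting the report's filtered data into that payload from within Power BI.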
@muhammadtalmeez3276 2 months ago
@@KevinFeasel Thank you so much, Kevin. I have been struggling with this for the last 3 days. Your answer is very helpful for concluding my research.
@badbad_ 2 months ago
10:29 Am I crazy or did you forget to show the workflow results?
@KevinFeasel 2 months ago
I did not come back to that GitHub Action execution or the workflow log, but I did show the results of the scan: that's the vulnerability alert that we see in the "Code scanning" menu item around 11:53. The workflow log itself has quite a few diagnostic entries, but isn't really that interesting on its own, unless you're troubleshooting a problem with CodeQL scans.
@jhonayebulgin9431 2 months ago
A word of encouragement for anyone coming across this video. You will make it; don't give up. Fear not, for there are brighter days ahead, for the race is not for the swift but for they that can endure. The Lord is with you; trust in him and he will see you through and work it out in Jesus' mighty name. I pray this word may bless someone. God bless you ☺ I just felt a pull to comment on this video and I believe that it is indeed for someone. Keep fighting, don't give up. Psalms 30:5: Weeping may last for a night, but joy comes in the morning.
@saisrirajnallam 2 months ago
Hi, thanks for your videos. I learned a lot from the content you post. As you are a certified professional in Azure, please make a video on the necessary and important ETL operations that even a data scientist should learn. What basic data engineering operations are useful for data scientists or ML professionals to know how to perform?
@KevinFeasel 2 months ago
Thanks for the kind words. I do have some of the Azure certifications (data scientist, AI engineer, data engineer, admin, Power BI data analyst, database administrator). I do have on my backlog the idea of doing a series on data engineering. It's a huge topic, so I haven't quite figured out yet how I want to tackle it. But it is in the plan.
@tianhaoluo6782 2 months ago
In order to submit the job at 20:14, do we need to set up Azure CLI and/or SDK ahead of time? On my end it looks like it is using my local version of Python. Thank you!
@KevinFeasel 2 months ago
You should have the az cli installed beforehand, yes. I have the pre-requisites in the readme file for the demo code: github.com/feaselkl/Beyond-the-Basics-with-AzureML When it does run, it will use your local Python installation, yes. But that will then make a call to the Azure ML API and request executing code on a compute instance there, so you're not doing actual model training locally, just orchestrating it locally.
@tianhaoluo6782 2 months ago
Thanks for the wonderful series! One thing I don't quite understand: why do we need the Enter Data Manually and Edit Metadata parts? Wouldn't it suffice to use Web Service Input only since it is an inference job? Does it basically 'tell' the model that now we do not have the labels anymore?
@KevinFeasel 2 months ago
You're exactly correct. If we stop right after adding the Web Service Input, the model we deploy will work fine, but our web service will expect exactly the same input columns as our initial CSV file. That includes the PaymentIsOutstanding label column, which would be silly to include in production. For that reason, we remove the ChicagoParkingTickets file input and substitute it with a manual version that excludes PaymentIsOutstanding from our expected inputs.
@rickpack6700 2 months ago
Taking the DP-100 today and found your video after scoring the lowest on "Deploy and retrain a model". Thank you!
@KevinFeasel 2 months ago
Good luck on the exam, Rick.
@rickpack6700 2 months ago
I passed! These videos helped. Thank you for putting so much effort into these, Kevin.
@113because 2 months ago
Thank you for your video. But when I followed your suggested steps, I got the error "azureml.studio.common.error.ColumnNotFoundError: Column with name or index "Label" does not exist in "Dataset", but exists in "Transformation"". Can you please help me?
@KevinFeasel 2 months ago
If you're following along with the videos in this series, we don't have a Label column at all. For that reason, I might not be able to tell you for sure why you're getting that error. My best guess, given current information, is that perhaps you have something in the transformations on the training side, renaming the PaymentIsOutstanding column as Label. If you did that, then your inference would expect the PaymentIsOutstanding column because it would need something to rename as Label.
@113because 2 months ago
Thanks for your response. Label is a column in my data, which I used to make a pipeline. First I used Label as my target for training; after that, I wanted to deploy an endpoint model which can do inference on new data (without the label/target), but I failed. Do you have a suggestion?
@KevinFeasel 2 months ago
@@113because Given current information, as I noted above, you might be using the Label column in one of your transformations (column renames, data type changes, cleaning missing data, etc.--basically, everything other than loading data, training the model, scoring the model, and evaluating the model). If so, then you'd have to remove Label from those transformations in the training section; otherwise, scoring will require Label in the Apply Transformation step.
@Cor-tex 3 months ago
At which point do you choose the GPU you want to use to train a model? Meaning, where is the list of GPUs I could choose from?
@KevinFeasel 3 months ago
That happens when you choose the Azure ML compute instance, which is at 6:35 in the video. That works because I've already set up the compute instance in a prior video in the series. To set one up, go to ml.azure.com and connect to your Azure ML workspace. Then, navigate to Compute and make sure the Compute instances tab is selected. Select the New button and you get a menu of available CPU and GPU options. As a quick note, trial accounts and free credits (like from Visual Studio subscriptions) won't have access to any GPU options, but you can at least see the GPUs that are available and if you are using a paid account, you can request quota for the setup you want.
@miguelangelvelarde 3 months ago
I tried several times to install MSSQL on Ubuntu 22.04 but always get the same error: /opt/mssql/bin/sqlservr: error while loading shared libraries: liblber-2.4.so.2: cannot open shared object file: No such file or directory 😖
@KevinFeasel 3 months ago
The most likely reason is that you're actually pointing to the Ubuntu 20.04 packages list rather than Ubuntu 22.04. The Focal release (Ubuntu 20.04) had as part of its requirements for SQL Server 2022 libldap-2.4-2 specifically, and Ubuntu 22.04 removed version 2.4 in favor of version 2.5. You can check to see if this is the case by navigating to /etc/apt/sources.list.d/ and then running the command "less mssql-server-2022.list". Inside there, it'll include a link to Microsoft's package server. If that includes /ubuntu/20.04/ in the domain, that's your culprit. First, uninstall mssql-server: "sudo apt remove mssql-server -y". Next, remove the mssql-server-2022.list file: "sudo rm mssql-server-2022.list". Then, add the Ubuntu 22.04 package link: "curl -fsSL packages.microsoft.com/config/ubuntu/22.04/mssql-server-2022.list | sudo tee /etc/apt/sources.list.d/mssql-server-2022.list". Now it should install the Ubuntu 22.04 version of SQL Server and not 20.04.
@TheMissysman 1 month ago
If it still doesn't work, look into /etc/apt/sources.list.d/additional.repositories.list and delete everything in there before installing MSSQL for 22.04. That fixed my problem.
@realzeti 3 months ago
Kevin! I really appreciate your effort in making this valuable material. I'm very surprised that you only have 2k subscribers for such detailed, clear, and relevant information. Please don't be discouraged, if such is the case.
@HusseinSaeed-ex7pm 3 months ago
Hey Kevin, great video. I'm trying to get rid of Windows, so I'm using SQL Server for Linux, running it in Docker on Ubuntu. Is there any way to implement merge replication? I'm using SSMS 18, and I've already implemented it on Windows, where it is working.
@KevinFeasel 3 months ago
Merge replication is not supported on Linux, no. Merge replication tends to be something of a laggard for most features. You'll see transactional and snapshot replication support in a variety of features but merge and peer to peer are much less likely to be supported when new things come out, including SQL Server on Linux. learn.microsoft.com/en-us/sql/linux/sql-server-linux-replication?view=sql-server-ver16#supported-features
@HusseinSaeed-ex7pm 3 months ago
@@KevinFeasel So can I set up bidirectional transactional replication on Linux? I want the subscriber to pull the data from the publisher, and sometimes the subscriber can make some changes. I want to move away from Windows to use Linux and Docker containers.
@KevinFeasel 3 months ago
@@HusseinSaeed-ex7pm This is where the documentation gets a little tricky. There's nothing saying you *cannot* use bidirectional transactional replication, but there's also nothing explicitly saying you *can*. I haven't tried the scenario, to be honest, so I don't know whether that's possible. I'd recommend trying it out to see. If you made me give my prior probability on success, I'd say about 30% chance of success, as it's a more complicated replication scenario than normal transactional replication and unless there were Microsoft customers clamoring for support of that particular feature, it'd probably be low on their implementation list. But there is a chance bidirectional replication came "for free" with the normal transactional replication code--the only way to know for sure is to try it and see if it works.
@anuj7286 3 months ago
Thank you so much. Please make a video on how to back up the database from the command line.
@KevinFeasel 3 months ago
Thanks for the idea. I'll add it to my backlog.
@scottsilvasy7855 3 months ago
👊 Well done Kevin. Thanks - one of the better tutorials I have ever seen.
@mehdikhezrian2257 3 months ago
Could you please not use the background music? It's very distracting :(
@alqnoa9890 3 months ago
And is this a classification model? What about the concept of the best generated model?
@KevinFeasel 3 months ago
If you're talking about the best model generated from my prior AutoML video, we're training a new model from scratch to see how to do it. If you're asking in general, you can do this in a couple of ways. One is to train separate models as different experiment runs, saving each in the Azure ML model registry and comparing model results--for classification, you might check measures like accuracy and F1 score. A second option would be to train separate models as runs in an experiment but tagged under the same model type, saving different versions of a model in the registry. Then, after comparing, you could delete the versions that don't perform as well. A third option would be to perform comparative model analysis as part of your initial training: you can incorporate hyperparameter sweeping and even use of different algorithms in the training steps and then save the best model of the bunch to the registry. I don't have an example of doing this in a video but Microsoft does have a code example of using a hyperparameter sweep: github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines/1c_pipeline_with_hyperparameter_sweep
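To make the "comparing model results" step in that answer concrete, here is a plain-Python sketch of ranking candidate models by F1 score, with accuracy as a tiebreaker. The prediction lists are made-up stand-ins for the outputs of two registered model versions.

```python
# Sketch of comparing candidate classification models by F1 and accuracy,
# written in plain Python so the metric arithmetic is visible. All labels
# and predictions below are illustrative.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def pick_best(y_true, candidates):
    # candidates: {model_name: predictions}; rank by F1, break ties on accuracy.
    return max(candidates, key=lambda m: (f1_score(y_true, candidates[m]),
                                          accuracy(y_true, candidates[m])))
```

In practice you would pull these predictions from experiment runs and keep only the winning model version in the registry.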
@alqnoa9890 3 months ago
Can you help me, please? Should we only upload the dataset without the Python code, or what should we upload? Please explain it to me; I have a project!
@KevinFeasel 3 months ago
You upload the dataset. To generate the file dataset, grab the file from the video description and save it locally. Then, upload it into Azure ML as a v1 Tabular type, not a v2 MLTable. In the Data menu, make sure you are on the Data assets tab. Then, select +Create and name the data asset something like ChicagoParkingTickets. Select in the Type menu "Tabular" from the v1 section. On the second page, create your data asset From local files and that will give you a few more pages around where to store the data, the file you want to upload, and additional settings. Those steps should be pretty straightforward, as I tried to ensure that there would be no complications with this dataset. The Python code is something you submit via API call and we do that in the next video in the series.