AWS Glue: Write Parquet With Partitions to AWS S3

15,786 views

DataEng Uncomplicated


1 day ago

This is a technical tutorial on how to write Parquet files to AWS S3 with AWS Glue using partitions, including how to define the data in the AWS Glue Data Catalog on write.
Timestamps:
00:00 Introduction
00:30 Remap Columns in DynamicFrame
02:57 Write to Parquet - getSink Method
Read CSV in AWS Glue: • AWS Glue: Read CSV Fil...
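The two steps in the timestamps (remap columns, then write partitioned Parquet via getSink) can be sketched as a Glue job script. This is an untested sketch that only runs inside an AWS Glue environment; the bucket, database, table, and column names are hypothetical placeholders.

```python
# Sketch of the Glue job pattern covered in the video: remap columns on a
# DynamicFrame, then write partitioned Parquet to S3 while registering the
# table in the Glue Data Catalog via getSink().
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read a source table that already exists in the Glue Data Catalog
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="customers",        # hypothetical database
    table_name="orders_csv",     # hypothetical source table
)

# Remap columns: (source name, source type, target name, target type)
dyf = dyf.apply_mapping([
    ("ORDER_ID", "string", "order_id", "string"),
    ("ORDER_DT", "string", "order_date", "string"),
    ("AMT", "string", "amount", "double"),
])

# getSink writes to S3 and can update the Data Catalog on write
sink = glue_context.getSink(
    connection_type="s3",
    path="s3://my-bucket/orders/",       # hypothetical output path
    enableUpdateCatalog=True,
    updateBehavior="UPDATE_IN_DATABASE",
    partitionKeys=["order_date"],        # partition column(s)
)
sink.setCatalogInfo(catalogDatabase="customers", catalogTableName="orders")
sink.setFormat("glueparquet")  # Parquet writer that supports schema evolution
sink.writeFrame(dyf)
```

Writing `"glueparquet"` rather than plain `"parquet"` lets the sink update the table schema in the catalog as it writes.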

Comments: 22
@gabiru-danger • 1 year ago
Great content!
@companionprose4286 • 1 year ago
Love this! FYI, when you're referencing a previous video, it might be a good idea to put a link in the description so we can easily find it.
@DataEngUncomplicated • 1 year ago
Thanks! You are right. I will add it!
@BenOgorek • 1 year ago
I followed the link 🥳
@JavierHernandez-xo5nb • 7 months ago
Excellent video... I wish you would make one on AWS QuickSight automation... 😊😊
@DataEngUncomplicated • 7 months ago
I've been working a bit with QuickSight. What type of video content about QuickSight would be helpful?
@jogeshrajiyan8313 • 1 year ago
Hi! I just wanted to know: is creating a database in the Glue catalog a pre-requisite before converting to a Parquet file, or can it be created automatically, as you referenced for the table in the setCatalogInfo() function?
@jogeshrajiyan8313 • 1 year ago
In the previous video I didn't see you create the database 'customer' while sourcing the data from S3 directly into Glue...
@DataEngUncomplicated • 1 year ago
Hi Josh, yes, creating a database in the Glue catalog (if not using the default database) is a pre-requisite if you want to reference your data through the Data Catalog. I created this database before making this video; I should have mentioned this. I don't think the method will write if the database doesn't exist, but I could be wrong as I have not tested this.
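Since the database needs to exist before the job writes, it can be pre-created outside the job. A hedged sketch using boto3 (the database name is hypothetical, and this requires AWS credentials with `glue:CreateDatabase` permission, so it is untested here):

```python
# Pre-create the Glue database that getSink's catalog update will target.
import boto3
from botocore.exceptions import ClientError

glue = boto3.client("glue")
try:
    glue.create_database(DatabaseInput={"Name": "customers"})  # hypothetical name
except ClientError as e:
    # AlreadyExistsException just means the database was created previously
    if e.response["Error"]["Code"] != "AlreadyExistsException":
        raise
```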
@user-zw1zo1iz8z • 5 months ago
Thank you for the tutorial! Could I personalize the parquet partition name?
@DataEngUncomplicated • 5 months ago
You're welcome. The partition is based on a column name, so the partition name should match the name of a column in your data.
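To make the naming concrete: Glue (like Spark) writes Hive-style partition directories named `column=value`, which is why the partition name has to match a column. A minimal, self-contained illustration with hypothetical bucket and column names:

```python
# Build the S3 prefix Glue would write for given partition column values.
def partition_path(base: str, partitions: dict) -> str:
    """Return a Hive-style partition prefix, e.g. base/col=value/."""
    parts = "/".join(f"{col}={val}" for col, val in partitions.items())
    return f"{base.rstrip('/')}/{parts}/"

# With partitionKeys=["order_date"], a row with order_date=2023-05-01 lands under:
path = partition_path("s3://my-bucket/orders", {"order_date": "2023-05-01"})
# path == "s3://my-bucket/orders/order_date=2023-05-01/"
```

Renaming the directory would break partition pruning, so customizing the label means renaming the column before the write.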
@joelluis4938 • 1 year ago
Hi! I've heard that you have the AWS Analytics Specialty certification... is that right? Could you please post a video with some advice or resources to prepare for this exam? I found your channel today and really liked it!
@DataEngUncomplicated • 1 year ago
Hey Joel! Welcome to the channel! I am in fact AWS certified with the Analytics Specialty certification. Sure, I'll add it to my video backlog... I have one video, related to optimizing data in data lakes, that covers an exam question. Most of my content is related to working with data on AWS.
@joelluis4938 • 1 year ago
@@DataEngUncomplicated Do you have any video showing the entire workflow of an analytics project on AWS from start to end? Collecting data locally, processing it, and maybe creating a dashboard on AWS, or connecting to other platforms like Power BI... I'm not sure how the entire process works in the cloud.
@udaynayak4788 • 1 year ago
Can you please create a video where you read data from Redshift tables in AWS Glue PySpark (spark.sql)?
@DataEngUncomplicated • 1 year ago
Hi Uday, sure, I'll actually make this my next video. They added some new AWS Glue Redshift capabilities where we can query the data with SQL from Redshift into a DynamicFrame.
@udaynayak4788 • 1 year ago
@@DataEngUncomplicated Eagerly waiting for your next video!
@sanishthomas2858 • 6 months ago
What is this interface? How did you open and install it and connect it to an AWS account? Can you show something for beginners?
@DataEngUncomplicated • 4 months ago
Hi, the interface I am using is just a Jupyter notebook. You could spin up a Jupyter notebook through the Glue service directly using interactive notebooks.
@asishb • 9 months ago
Hi, how can I write the transformed data into an AWS Glue Data Catalog table WITHOUT writing the data to S3? Please help!
@DataEngUncomplicated • 9 months ago
Hi, I actually have the exact video you are looking for, which doesn't use the Glue catalog: kzfaq.info/get/bejne/pr6daNBqu9eWdJc.html. Hopefully this is helpful.
@asishb • 9 months ago
@@DataEngUncomplicated No. I want to write the data only to the Glue Data Catalog (in your case, only the "orders" table) instead of writing it to S3. Also, I tried the methods you beautifully explained, but: 1) How can I save the file as CSV? I tried to set the format with .setFormat("csv"), but the files are stored without a file extension in S3. 2) The table that is auto-created using getSink() is blank. How do I populate the data?
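On the CSV part of the question above, one way to write a DynamicFrame as CSV is `write_dynamic_frame.from_options` rather than getSink. An untested sketch that only runs in a Glue environment, with hypothetical paths and names; note that Spark-based writers emit part files (e.g. `run-...-part-r-00000`) without a `.csv` extension by default:

```python
# Write a DynamicFrame to S3 as CSV instead of Parquet.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="customers",      # hypothetical database
    table_name="orders",       # hypothetical table
)

glue_context.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/orders_csv/"},  # hypothetical
    format="csv",
    format_options={"separator": ",", "writeHeader": True},
)
```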