Apache Parquet vs Apache Avro

There comes a point in the life of every data person when we have to graduate from CSV files. At a certain point the data becomes big enough, or we hear talk on the street about other file formats. Apache Parquet and Apache Avro are two of those formats that have been coming up more with the rise of distributed data processing engines like Spark.

Read more
Complexity is in the eye of the beholder.

ML Pipelines

Building Machine Learning (ML) pipelines with big data is hard enough, and it doesn’t take much of a curve ball to make it a nightmare. Most of what you will read online are tutorials on how to take a few CSV files and run them through some sklearn package. If you are lucky, you might find some “big data” ML stories on Medium where someone uses Spark to crunch a bunch of JSON, Parquet, or CSV files at a scale of 10 to a few hundred gigabytes of data. Usually they are simplistic and ambiguous. Unfortunately, that isn’t how it works in the real world.

Read more

On again, off again. I feel like that is the best way to describe Apache Airflow. It started out around 2014 at Airbnb and has been steadily gaining traction and usage ever since, albeit slowly. I still believe that Airflow is very underutilized in the data engineering community as a whole; most everyone has heard of it, but its usage seems to be sporadic at best. I’m going to talk about what makes Apache Airflow the perfect tool for any Data Engineer, and show you how you can use it to great effect while not committing to it completely.
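To give a flavor of what that looks like, here is a minimal DAG sketch that just shells out to an existing script. The dag_id, schedule, and script path are placeholders for illustration, not from any real pipeline.

```python
# A minimal Airflow DAG sketch. The dag_id, schedule, and script path are
# placeholders for illustration only.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_pipeline",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Shell out to an existing script instead of rewriting the whole pipeline
    # inside Airflow, which keeps the commitment to Airflow itself small.
    run_pipeline = BashOperator(
        task_id="run_pipeline",
        bash_command="python /opt/pipelines/run.py",
    )
```

That is the whole trick: schedule what you already have, without rewriting everything as Airflow-native tasks.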

Read more
Exploring Elasticsearch with Python.

What’s Elasticsearch, precious? I feel like Gollum when confronted by taters. Elasticsearch has been around for a while now; built on Lucene, it’s become a well-known name in the world of text and semi-structured data storage, analysis, and retrieval. Even though it’s popular enough to get name recognition, I’ve rarely run across it in the wild. We are going to dip our toes into Elasticsearch by working on a small project to store and search a book (or two). It gives us just enough simple problems to solve that by the end we should have at least a basic understanding of how to connect, store, and retrieve simple documents with Elasticsearch.
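The shape of the thing looks roughly like this: connect, index a chunk of text as a document, then run a full-text query against it. This is only a sketch against a local instance with a recent version of the official Python client; the index name and document fields are made up for illustration.

```python
# Sketch: index and search a book passage with the official elasticsearch
# Python client. Host, index name, and document fields are assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Store one passage of a book as a document.
es.index(
    index="books",
    document={"title": "Moby Dick", "chapter": 1, "text": "Call me Ishmael."},
)

# Refresh so the document is visible to search, then run a match query.
es.indices.refresh(index="books")
results = es.search(index="books", query={"match": {"text": "Ishmael"}})
for hit in results["hits"]["hits"]:
    print(hit["_source"]["title"], hit["_score"])
```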

Read more

Ah. What a classic. The one piece of code that I end up writing over and over again; you would think I would have stashed it away by now. Not going to lie, I usually have to Google it while thinking: is this the right way? Should I just open the CSV file and iterate it? Should I import the csv module? Should I just use Pandas? Does it matter? Probably not.
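For the record, all three get the job done. The file and column names below are placeholders.

```python
# Three common ways to read a CSV in Python. File and column names are
# placeholders for illustration.
import csv

import pandas as pd

# Option 1: plain iteration, splitting lines yourself (fragile with quoted fields).
with open("data.csv") as f:
    header = next(f).rstrip("\n").split(",")
    for line in f:
        values = line.rstrip("\n").split(",")

# Option 2: the csv module, one dict per row, quoting handled for you.
with open("data.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row["name"])

# Option 3: Pandas, if the file fits in memory and you want a DataFrame anyway.
df = pd.read_csv("data.csv")
```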

Read more
A fight to the death. A comparison of geo-spatial tools in Python. What’s easy and fast to use.

It’s a fight to the death, people… that’s why it’s called Thunderdome. This will be no different. Last time we talked about the very basics of the strange world of geo-spatial tools for data engineering. The next most obvious thing to do, of course, is to see which tool is the best. By best I mean which tools can be used to load and do simple manipulation of data in a fast and relatively simple manner.

Read more
A quick view of the geospatial data landscape.

What does a data engineer need to know about working with geospatial data? I’m going to give my two cents on what is and is not important. First, prepare to be annoyed, as you will most likely spend hours debugging strange and non-obvious errors and bugs. You should run screaming the other way, but in case that is not an option, here are the basics.

Read more
Async file operations in Python: is the juice worth the squeeze?

What I’ve greatly feared has come to pass. I’ve come to love one of the most confusing parts of Python: asyncio. It gives data engineers building pipelines in Python an incredible ability to cut out so much wasted IO time. It saves money. It’s faster. People think you’re smarter than you are. Tutorials are one thing, but implementing it in your own complex code is typically mind-bending and a test of your patience and self-worth.
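As a taste, here is a tiny sketch of reading several files concurrently with asyncio and the aiofiles library; the file names are placeholders, and real pipelines are of course messier than this.

```python
# Sketch: read several files concurrently with asyncio + aiofiles.
# File names are placeholders.
import asyncio

import aiofiles


async def read_file(path: str) -> str:
    # The await points let the event loop work on other files (or network
    # calls) while this one is waiting on IO.
    async with aiofiles.open(path) as f:
        return await f.read()


async def main() -> None:
    paths = ["a.txt", "b.txt", "c.txt"]
    contents = await asyncio.gather(*(read_file(p) for p in paths))
    print([len(c) for c in contents])


asyncio.run(main())
```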

Read more

Raise your hand if you’ve ever used Dask? … Me neither. Unlike with tools like Spark, I’ve only recently started seeing articles and podcasts pop up around Dask. It’s written in Python and billed as a distributed data processing framework, so I figured it was about time to check it out. Surprisingly, reading up on the Dask website, it appears they don’t necessarily claim to be out to replace things like Spark. Time to kick the tires.
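Kicking the tires looks something like this: a Dask DataFrame with the familiar pandas-style API, where nothing runs until you ask for a result. The file glob and column names below are placeholders.

```python
# Sketch: a first pass at Dask DataFrames. The file glob and column names
# are placeholders.
import dask.dataframe as dd

# read_csv is lazy and accepts globs, so many files can be treated as one table.
df = dd.read_csv("data/*.csv")

# Familiar pandas-style API; nothing actually runs until .compute() is called.
result = df.groupby("customer_id")["amount"].sum().compute()
print(result.head())
```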

Read more
Waiting for large files to download is boring.

There is nothing more annoying than sitting around waiting for files to download. That was true while I was in high school staring at LimeWire, and it’s still true today, especially when you’re a data engineer who’s supposed to make data pipelines fast. You’re in luck! Yes, it is possible to download a large file from Google Cloud Storage (GCS) concurrently in Python. It took a little digging in Google’s terrible documentation for their Python cloud storage wrapper (hear my snarkiness), but I found a diamond in the rough.
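One way to pull it off, not necessarily the exact trick from the post, is to split the object into byte ranges and fetch them with a thread pool, since Blob.download_as_bytes accepts start and end offsets. The bucket, object name, chunk size, and worker count below are placeholders.

```python
# Sketch: concurrent ranged download from GCS with a thread pool.
# Bucket, object name, chunk size, and worker count are placeholders.
from concurrent.futures import ThreadPoolExecutor

from google.cloud import storage

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB per range request

client = storage.Client()
blob = client.bucket("my-bucket").blob("big-file.parquet")
blob.reload()  # fetch metadata so blob.size is populated

# Build inclusive (start, end) byte ranges covering the whole object.
ranges = [
    (start, min(start + CHUNK_SIZE, blob.size) - 1)
    for start in range(0, blob.size, CHUNK_SIZE)
]

def fetch(byte_range):
    start, end = byte_range
    # Each worker downloads one slice of the object.
    return blob.download_as_bytes(start=start, end=end)

with ThreadPoolExecutor(max_workers=8) as pool:
    chunks = list(pool.map(fetch, ranges))

# Stitch the slices back together in order.
with open("big-file.parquet", "wb") as f:
    for chunk in chunks:
        f.write(chunk)
```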

Read more