Not going to lie. I’ve been trying to figure out for a while now where Apache Flink fits in the Data Engineering world. A year or two ago I didn’t see much content posted about it, but it seems to be picking up steam. I’ve mostly managed to avoid understanding what Flink is or does, but I figured it’s time to give my brain a much needed workout. When I was ignoring Flink, I just chalked it up as another messaging/streaming system like Kafka or Pulsar. Apparently I was wrong … no surprise there.

Read more

Every good story starts with a few different characters, right? It’s like the spice of life, a little bit of this, a little bit of that. It’s the way of the world. In all my data wandering I’ve come across lots of different types of data engineers. I can usually put them into three different categories, somewhat similar but in many ways quite different.

Read more

I am always amused by the apparently contradictory nature of working in the world of data. There are always bits and pieces that come and go, the popular, the out of style … new technology driving new approaches and practices. One of the hot topics the last decade has produced is DevOps, now a staple of almost every tech department. Like pretty much every other newish Software Engineering methodology, the data world has struggled to adopt and keep pace with DevOps best practices. This one is always a thorn in my side, making my life more difficult. The simplicity with which it can be adopted is amazing, and the unwillingness and lack of adoption is strange.

Read more

There are few things in life that are worse than cracking open some serious PySpark pipeline code and realizing there isn’t a single function written to encapsulate logic … wondering if some change you are about to make will bring down the whole pipeline. When you are new to a codebase you don’t know what you don’t know, you don’t have any backstory, and you are usually flying by the seat of your pants in the beginning. When you have no unit tests, usually the only other way to test changes on a Spark pipeline is to run it … which is sometimes easier said than done in a development environment. The first line of defence should be unit testing the entire PySpark pipeline.
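Just to make the idea concrete, here’s a minimal sketch of what that first line of defence can look like: pull the transformation logic out into a plain function and hit it with pytest against a tiny local SparkSession. The function, column names, and fixture below are made up for illustration, not pulled from any real pipeline.

```python
# A minimal sketch of unit testing a PySpark transformation with pytest.
# The function add_full_name and the column names are hypothetical examples.
import pytest
from pyspark.sql import SparkSession
import pyspark.sql.functions as F


def add_full_name(df):
    # Hypothetical transformation pulled out of the pipeline so it can be tested.
    return df.withColumn("full_name", F.concat_ws(" ", F.col("first"), F.col("last")))


@pytest.fixture(scope="session")
def spark():
    # A small local SparkSession is plenty for unit tests.
    return SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()


def test_add_full_name(spark):
    df = spark.createDataFrame([("Jane", "Doe")], ["first", "last"])
    result = add_full_name(df).collect()[0]
    assert result["full_name"] == "Jane Doe"
```

Once the logic lives in small functions like that, testing a change means running pytest locally instead of firing up the whole pipeline.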

Read more

It seems like today the problems and challenges of Data Engineering are being solved at a lightning pace. New technologies are coming out all the time that seem to make life a little easier (or harder) while solving age-old problems. I feel like Machine Learning Ops (MLOps) is not one of those things. It’s still a hard nut to crack. There has been a smattering of new tools like MLflow and Seldon Core, as well as the Google Cloud AI Platform and things like AWS SageMaker, but apparently there is still something missing. Nothing has really gained widespread adoption … and I have some theories why.

Read more

If you’re anything like me, when someone says Delta Lake you think Databricks. But the mythical Delta Lake is an open source project, available to anyone running Apache Spark. It almost seems too good to be true, ACID transactions at Spark scale? Incredible. This is the future, it has to be. The lines of what is a data warehouse have been starting to blur for a long time, and I have a feeling Delta Lake will be the death blow to the traditional DW … or its rebirth??
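To give a flavor of what that looks like in practice, here’s a rough sketch of using Delta Lake from plain PySpark. It assumes the delta-spark package is on the classpath; the path and column names are placeholders I made up.

```python
# A rough sketch of Delta Lake from open source Spark, assuming the
# delta-spark package is available; paths and columns are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

df = spark.createDataFrame([(1, "open"), (2, "closed")], ["order_id", "status"])

# Writes are transactional; an overwrite or append won't leave half-written files behind.
df.write.format("delta").mode("overwrite").save("/tmp/delta/orders")

spark.read.format("delta").load("/tmp/delta/orders").show()
```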

Read more

I am going to peer into the crystal ball, the seeing stone, looking into the murky future of Data Engineering to see what mysteries it holds. I’ve seen a story, a tale of two Data Warehouses; I’ve seen Machine Learning, Streams, Distributed Systems, Storage, the eternal SQL. A lot has changed in the world of Data Engineering in the last few years, but a lot has not changed in the data world as well. Articles about the end of ETL and the rise of ELT, Hadoop being dead, new data paradigms, no-code data flows, managed services, yet very little has actually changed, or it changes at a snail’s pace. Yet, inevitably, the story and future of data engineering can be told through the tale of two data warehouses.

Read more

In part 1 of the big data file formats we reviewed Parquet vs Avro. It was apparent from the start that the two file formats were built for different things. Avro is clearly a complex, row-structured file format used in communication and transactions, where schema is king and nested structures are no problem. Parquet, on the other hand, has risen to the top with the popularity of Spark; it is columnar storage and is well suited to structured and tabular type data. But, lest the annals of the inter-webs call us uncouth and forgetful, we must add the ORC file format to the list.
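For a little context before the comparison, here’s a quick sketch of writing the same DataFrame out in all three formats from Spark. Parquet and ORC writers ship with Spark; the Avro writer assumes the external spark-avro package is on the classpath. The paths and data are placeholders.

```python
# A small sketch writing one DataFrame in all three formats.
# Parquet and ORC are built into Spark; Avro needs the spark-avro package.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("file-formats").getOrCreate()
df = spark.range(1_000).withColumnRenamed("id", "event_id")

df.write.mode("overwrite").parquet("/tmp/formats/events_parquet")            # columnar
df.write.mode("overwrite").orc("/tmp/formats/events_orc")                    # columnar
df.write.mode("overwrite").format("avro").save("/tmp/formats/events_avro")   # row based
```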

Read more

Don’t you like stuff for free? Don’t you like it when stuff is just handed to you? I mean, when is the last time you didn’t want a free t-shirt? How about 20 bucks in the mail from your Grandma? That’s kinda what Pipelines are in Spark ML. The Apache Spark ML library is probably one of the easiest ways to get started with Machine Learning. Leaving all the fancy stuff to the Data Scientists is fine, Data Engineers are more interested in the end-to-end. The Pipeline and the Spark ML APIs provide a straightforward path to building ML Pipelines that lowers the bar for entry into ML. So, step right up, come get your free ML Pipeline.
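As a teaser, here’s roughly what one of those free Pipelines looks like in PySpark: a couple of stages chained together and fit in one go. The DataFrame, feature columns, and stages below are invented for the example, not taken from a real project.

```python
# A bare-bones Spark ML Pipeline on a made-up DataFrame with two
# numeric feature columns and a binary label.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("ml-pipeline").getOrCreate()
train = spark.createDataFrame(
    [(1.0, 0.5, 1.0), (0.0, 2.3, 0.0), (1.5, 0.1, 1.0), (0.2, 3.1, 0.0)],
    ["feat_a", "feat_b", "label"],
)

assembler = VectorAssembler(inputCols=["feat_a", "feat_b"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

# The Pipeline chains the stages; fit() returns a PipelineModel you can save and reuse.
model = Pipeline(stages=[assembler, lr]).fit(train)
model.transform(train).select("label", "prediction").show()
```

That’s the whole trick: feature prep and model live in one object, so the same Pipeline runs at training time and at scoring time.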

Read more

With parquet taking over the big data world, as it should, and csv files being that third wheel that just will never go away … it’s becoming more and more common to see the repetitive task of converting csv files into parquet. There are lots of reasons to do this: compression, fast reads, integrations with tools like Spark, better schema handling, and the list goes on. Today I want to see how many ways I can figure out how to simply convert an existing csv file to parquet with Python, Scala, and whatever else is out there. And how simple and fast each option is. Let’s do it!
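As a preview of one of the simpler options, here’s a sketch using pyarrow, which can read the csv and write the parquet in a couple of lines. The file names are placeholders.

```python
# One of the simplest routes: pyarrow reads the csv into a Table
# and writes it straight back out as parquet. File names are placeholders.
import pyarrow.csv as pv
import pyarrow.parquet as pq

table = pv.read_csv("input.csv")
pq.write_table(table, "output.parquet")
```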

Read more