I am going to peer into the crystal ball, the seeing stone, looking into the murky future of Data Engineering to see what mysteries it holds. I’ve seen a story, a tale of two Data Warehouses; I’ve seen Machine Learning, Streams, Distributed Systems, Storage, the eternal SQL. A lot has changed in the world of Data Engineering in the last few years, but a lot has stayed the same too. Articles proclaim the end of ETL and the rise of ELT, the death of Hadoop, new data paradigms, no-code data flows, managed services, yet very little actually changes, or it changes at a snail’s pace. Inevitably, the story and future of data engineering can be told through the tale of two data warehouses.

Read more

In part 1 of the big data file formats we reviewed Parquet vs Avro. It was apparent from the start that the two file formats were built for different things. Avro is clearly a complex row-structured file format used in communication and transactions, where schema is king and nested structures are no problem. Parquet, on the other hand, has risen to the top with the popularity of Spark; it’s columnar storage, well suited to structured and tabular data. But, lest the annals of the inter-webs call us uncouth and forgetful, we must add the ORC file format to the list.
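
To make the difference concrete, here’s a minimal PySpark sketch writing the same DataFrame in all three formats. The paths are throwaway placeholders, and the Avro write assumes the external spark-avro package is on the classpath; treat this as a sketch, not the post’s benchmark code.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("file-formats").getOrCreate()

df = spark.createDataFrame(
    [(1, "hobbit", "Frodo"), (2, "hobbit", "Samwise")],
    ["id", "race", "name"],
)

# Columnar formats, well suited to analytics on tabular data.
df.write.mode("overwrite").parquet("/tmp/characters_parquet")
df.write.mode("overwrite").orc("/tmp/characters_orc")

# Row-based Avro needs the external spark-avro package on the classpath
# (e.g. --packages org.apache.spark:spark-avro_2.12:<your spark version>).
df.write.mode("overwrite").format("avro").save("/tmp/characters_avro")
```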

Read more

Don’t you like stuff for free? Don’t you like it when stuff is just handed to you? I mean, when was the last time you didn’t want a free t-shirt? How about 20 bucks in the mail from your Grandma? That’s kinda what Pipelines are in Spark ML. The Apache Spark ML library is probably one of the easiest ways to get started with Machine Learning. Leaving all the fancy stuff to the Data Scientists is fine; Data Engineers are more interested in the end-to-end. The Pipeline, and the Spark ML APIs, provide a straightforward path to building ML Pipelines that lowers the bar for entry into ML. So, step right up, come get your free ML Pipeline.
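
Here’s a minimal sketch of what one of those free Pipelines looks like. The toy data and column names are made up for illustration; the point is how each stage hands its output column to the next.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("ml-pipeline").getOrCreate()

# Hypothetical training data: two numeric features and a string label.
train = spark.createDataFrame(
    [(0.0, 1.1, "no"), (2.0, 1.0, "yes"), (2.1, 1.3, "yes"), (0.1, 1.2, "no")],
    ["f1", "f2", "label_str"],
)

# Each stage feeds the next -- that's the whole trick of a Pipeline.
indexer = StringIndexer(inputCol="label_str", outputCol="label")
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

pipeline = Pipeline(stages=[indexer, assembler, lr])
model = pipeline.fit(train)  # fits every stage, in order
model.transform(train).select("features", "label", "prediction").show()
```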

Read more

With Parquet taking over the big data world, as it should, and csv files being that third wheel that will just never go away… it’s becoming more and more common to see the repetitive task of converting csv files into parquets. There are lots of reasons to do this: compression, fast reads, integrations with tools like Spark, better schema handling, and the list goes on. Today I want to see how many ways I can figure out to simply convert an existing csv file to a parquet with Python, Scala, and whatever else is out there. And how simple and fast each option is. Let’s do it!
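
As a teaser, here are two of the easier Python routes. File names are placeholders, and pandas needs a parquet engine (pyarrow or fastparquet) installed.

```python
# Option 1: pandas -- fine for files that fit in memory.
import pandas as pd

pd.read_csv("data.csv").to_parquet("data.parquet")

# Option 2: pyarrow directly -- skips the pandas layer entirely.
from pyarrow import csv, parquet

parquet.write_table(csv.read_csv("data.csv"), "data.parquet")
```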

Read more

I’ve always been surprised, with the rise of data engineering and big data, how hard it is to find good data engineering content that comes out somewhat regularly. Tech moves fast, and I feel like data engineering moves even faster. New tools and systems come out with regular frequency, and it’s hard to keep up with what’s hot and what’s not. But I still think it’s important to keep a finger on the pulse of which tech stacks are starting to take over (Spark) and which are fading into oblivion. So here is my top ten list of data engineering blogs: these are the places I frequent so I at least know what’s going on in the world of data engineering.

Read more

I’ve always been surprised at how rarely the Python code I see in the wild uses the built-in map() and filter() functions. I’ve always found them useful and easy to use, but I don’t often come across them; I’ve even been asked to remove them from my MRs/PRs, for no other reason than that they are supposedly ambiguous to some people? That’s got me thinking a lot about map() and filter() as they relate to readability, functional programming, side effects, and other never-ending debates where no one can even agree on the “correct” definition. Seriously. But I will leave that rant for another time.
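
For anyone who hasn’t seen them standalone, here’s a quick example of the pattern in question, next to the comprehension most reviewers seem to prefer:

```python
nums = [1, 2, 3, 4, 5, 6]

# map() and filter() as standalone built-ins -- lazy and composable.
evens_squared = map(lambda n: n * n, filter(lambda n: n % 2 == 0, nums))
print(list(evens_squared))  # [4, 16, 36]

# The comprehension version -- same result, allegedly less "ambiguous".
print([n * n for n in nums if n % 2 == 0])
```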

Read more

Ever felt like just exploring documentation… seeing what you can find? That’s what you do on a cold Sunday afternoon during the first snowstorm of the year, after the initial fun has worn off, the kids don’t want to go outside anymore, and Netflix has nothing new to offer up. So I thought I might as well spend some time poking around the PySpark DataFrame API, seeing what strange wonders I can uncover. I did find a few methods that took me back to my SQL Data Warehouse days. Memories of my old-school Data Analyst and Business Intelligence days in Data Warehousing… the endless line of SQL queries being written day after day. Anyway, let’s dive into the 4 analytical methods you can call on your PySpark DataFrame, buried in the documentation like some tarnished gem.
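
I won’t spoil all four here, but to give a taste of that warehouse flavor, here’s a sketch of rollup() and cube(), two DataFrame methods straight out of the GROUP BY ROLLUP playbook. The data is made up, and these aren’t necessarily the exact four the post covers.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("warehouse-flavor").getOrCreate()

sales = spark.createDataFrame(
    [("east", "2021", 100), ("east", "2022", 150), ("west", "2021", 90)],
    ["region", "year", "amount"],
)

# rollup() gives subtotals per region plus a grand total, like GROUP BY ROLLUP.
sales.rollup("region", "year").agg(F.sum("amount").alias("total")).show()

# cube() goes further: every combination of the grouping columns.
sales.cube("region", "year").agg(F.sum("amount").alias("total")).show()
```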

Read more
Who who? Apache Cassandra, who?

Hmm… yet another distributed database… will it ever end? Probably not. It’s hard to keep up with them all, even the old ones. That brings me to Apache Cassandra. Of all the popular big data distributed databases, Cassandra seems to be that student who always sits in the back row and never says anything… you forget they are there… until someone says their name… Apache Cassandra. I honestly didn’t even know what space Cassandra fit in before trying to install and use it… so this should be fun. What is Cassandra? A distributed NoSQL database.
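
For the curious, here’s roughly what a first poke looks like with the DataStax Python driver (pip install cassandra-driver). The host, keyspace, and table names are made up; this is a sketch against a local single-node install, not the post’s code.

```python
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])  # default port 9042
session = cluster.connect()

# A keyspace is roughly a database; replication is where the "distributed" lives.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.users (
        user_id int PRIMARY KEY,
        name text
    )
""")

session.execute(
    "INSERT INTO demo.users (user_id, name) VALUES (%s, %s)", (1, "frodo")
)

for row in session.execute("SELECT user_id, name FROM demo.users"):
    print(row.user_id, row.name)

cluster.shutdown()
```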

Read more

I’ve met my fair share of snooty people who poo-poo SQL and databases as second-class hand-me-downs. I still remember talking to an academic computer science grad who explained to me how he refused to teach database classes; he was just too good for that. Whatever. Apparently, how 90% of companies manage to operate as data-driven businesses just isn’t important to some people. There is probably nothing more important in the tool belt of a data engineer than being above average at SQL and databases: tuning queries, writing queries, indexing, designing data warehouses. I’m sure there are some Hadoop data engineers who skipped the RDBMS step, but that is not the normal path of a data engineer. Let’s dive into the fundamentals of SQL and databases.
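
A minimal sketch of those bread-and-butter skills, using sqlite3 from the Python standard library so it runs anywhere. Table and column names are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("frodo", 9.50), ("sam", 12.00), ("frodo", 3.25)],
)

# An index on the filter/join column is often the difference
# between a full scan and a quick seek.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

for row in conn.execute(
    "SELECT customer, SUM(total) FROM orders GROUP BY customer ORDER BY 2 DESC"
):
    print(row)
```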

Read more
Trying to learn Scala drives me crazy.

I seriously don’t know why I keep doing this to myself. I know learning new things is something I need to do, but why Scala? I’m perfectly happy writing Python all day long. It’s straightforward and concise: no boilerplate, no re-inventing the wheel. I’ve written pipelines that crunch hundreds of TBs of data in Python, so all the snotty people who complain about Python not being fast enough or whatever can go hang out with this cow, who looks like he could use a friend. This is something I’ve been meaning to do for a while: use Scala to read some text file(s) and store the data somewhere with some client. I chose ElasticSearch. I really just wanted practice doing something simple like reading files, and I was curious about how good the Scala clients are for popular tools.

Read more