I was recently confronted with an interesting conundrum while writing a complex data pipeline. The problem arose from my quest to reduce complexity in one part of the design, only to watch that complexity creep into another part, reinforcing the classic question of whether you can ever really make the complexity pea go away, or whether you simply shuffle it somewhere else to hide it.


Being in Data Analytics is a meat grinder; it’s the worst job ever. Horrible it is. It will crush you.

Back in March, I did a writeup and experiment called DuckDB vs Polars – Thunderdome, the 16GB on a 4GB machine challenge. The idea was to see whether the two tools could process “larger than memory” datasets with lazy execution. Polars worked fine; DuckDB failed in spectacular fashion.

I also noted how many people had opened issues on GitHub about this very thing, but the issues were either ignored or closed. Someone on YouTube said some of these OOM issues had been fixed in recent releases.
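To make the setup concrete, here is a minimal sketch of the kind of lazy, larger-than-memory pipeline the challenge exercised. The file paths, column names, and memory limit are hypothetical placeholders; the actual benchmark code is in the linked post.

```python
import duckdb
import polars as pl

# Polars: build a lazy query over Parquet files (hypothetical path/columns)
# and let the streaming engine process chunks instead of loading everything.
lazy = (
    pl.scan_parquet("data/trips_*.parquet")
    .filter(pl.col("fare_amount") > 0)
    .group_by("payment_type")
    .agg(pl.col("fare_amount").sum().alias("total_fares"))
)
print(lazy.collect(streaming=True))

# DuckDB: same aggregation via SQL, with a memory cap roughly matching
# the 4GB machine from the challenge.
con = duckdb.connect()
con.execute("SET memory_limit = '3GB'")
print(
    con.execute(
        """
        SELECT payment_type, SUM(fare_amount) AS total_fares
        FROM read_parquet('data/trips_*.parquet')
        GROUP BY payment_type
        """
    ).fetchdf()
)
```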


I recently did a post on LinkedIn and Reddit about Databricks removing the Standard Tier and forcing folks into Unity Catalog. The post blew up and got more traction than I expected. Enough for the Databricks folks to hunt me down at work and tell me I’m naughty.

I will be writing a more in-depth post soon on Substack about the downsides of Vendor Lock-in and how Data Teams should think about such things.

I never thought I would live to see the day; it’s crazy. I’m not sure whose idea it was to make it possible to write Apache Spark code with Rust, Golang, or Python … but they are all geniuses.

As of Apache Spark 3.4, it is now possible to use Spark Connect … a thin client API that sits on top of the DataFrame API and talks to a remote Spark cluster.

You can now connect backend systems and code, written in Rust, Golang, etc., to a Spark server, run commands, and get results remotely. Simply amazing. A new era of tools and products is going to be unleashed on us. We are no longer chained to the JVM. The walls have been broken down. The future is bright.
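For a feel of what that looks like in practice, here is a minimal Python sketch using the Spark Connect client. The server address is a placeholder, and it assumes a Spark Connect server (Spark 3.4+) is already running and reachable.

```python
from pyspark.sql import SparkSession

# Connect to a remote Spark Connect server (address is a placeholder).
# Requires the connect extras: pip install "pyspark[connect]>=3.4"
spark = (
    SparkSession.builder
    .remote("sc://my-spark-server:15002")
    .getOrCreate()
)

# The DataFrame API works as usual, but execution happens on the remote cluster.
df = spark.range(1_000_000).selectExpr("id", "id % 10 AS bucket")
df.groupBy("bucket").count().show()
```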

Have you ever wondered, at a high level, what it’s like to build production-level data pipelines on Databricks? What does it look like, and what tools do you use?

Ever wondered how to build an end-to-end project for an open-source Python package that gets published to PyPI? I built out lakescum, an open-source package to help with querying Databricks Unity Catalog Delta Lake tables with Polars, DuckDB, or PyArrow. https://github.com/danielbeach/lakescum
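For context, this is roughly the kind of thing the package helps with: getting the Delta data behind a Unity Catalog table into Polars, DuckDB, or PyArrow. The sketch below goes through the deltalake library directly with a placeholder path and omits authentication; it is not lakescum’s actual API, which lives in the repo above.

```python
import duckdb
import polars as pl
from deltalake import DeltaTable

# Placeholder path to the storage location backing a Delta table
# (authentication / storage options omitted for brevity).
table_path = "abfss://container@account.dfs.core.windows.net/tables/my_table"

# Load the Delta table as a PyArrow table.
arrow_table = DeltaTable(table_path).to_pyarrow_table()

# Hand the same Arrow data to Polars or DuckDB.
pl_df = pl.from_arrow(arrow_table)
duck_df = duckdb.from_arrow(arrow_table).df()

print(pl_df.head())
print(duck_df.head())
```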

Unless you’ve been hiding under a rock, you’ve probably heard the hubbub over Devin, the new AI Software Engineer that is going to take your job.

While this is a genius piece of marketing … it’s a bunch of crud.

Never fear, you are in no more danger of losing your job in Software than when ChatGPT and Copilot hit the market. In fact, the opposite is true: there will be more Software jobs than ever, not fewer.

AI tools might look pretty in a video, but in reality, as anyone who actually uses them knows, they are still very bad at programming and don’t do a very good job. Most of what a Senior Engineer does that makes them Senior is not spitting out gobs of code. Any dingdong can do that.

I recently did a challenge, and the results were clear: DuckDB CANNOT handle larger-than-memory datasets. OOM errors. See the link below for more details.

DuckDB vs Polars – Thunderdome. 16GB on 4GB machine Challenge. 

 

Recently an Architect at Databricks recommended people use Notebooks for Production workloads. A very bad and horrible idea. It means very expensive compute for most people (All Purpose Clusters), and it leads to horrible development practices. It set off a firestorm on LinkedIn when I commented that people SHOULD NOT follow this advice.
