I recently did a post on LinkedIn and Reddit about Databricks removing the Standard Tier and forcing folks into Unity Catalog. The post blew up and got far more traction than I expected. Enough for the Databricks folks to hunt me down at work and tell me I’m naughty.

I will be writing a more in-depth post soon on Substack about the downsides of Vendor Lock-in and how Data Teams should think about such things.

I never thought I would live to see the day … it’s crazy. I’m not sure whose idea it was to make it possible to write Apache Spark code with Rust, Golang, or Python … but they are all geniuses.

As of Apache Spark 3.4, it is now possible to use Spark Connect … a thin client API that sits on top of a Spark Cluster and exposes the DataFrame API remotely.

You can now connect backend systems and code written in Rust, Golang, and the like to a Spark server, run commands, and get results remotely. Simply amazing. A new era of tools and products is going to be unleashed on us. We are no longer chained to the JVM. The walls have been broken down. The future is bright.
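Here is roughly what that looks like from the Python side. A minimal sketch, assuming you already have a Spark Connect server up and listening on the default port (15002) … the host is a placeholder:

```python
# A minimal Spark Connect sketch (PySpark 3.4+, pip install "pyspark[connect]").
# Assumes a Spark Connect server is already running at sc://localhost:15002.
from pyspark.sql import SparkSession

# .remote() builds a thin gRPC client instead of starting a local JVM.
spark = SparkSession.builder.remote("sc://localhost:15002").getOrCreate()

df = spark.range(10).filter("id % 2 = 0")

# The DataFrame plan is shipped to the remote cluster for execution;
# only the results travel back over the wire.
df.show()
```

The Golang and Rust clients follow the same pattern: build a DataFrame plan locally, ship it over gRPC, and let the cluster do the heavy lifting.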

It’s been a while since I wrote about Polars on this blog; I’ve been remiss. Some time ago I wrote a very simple comparison of switching from Pandas to Polars. I didn’t put much real effort into it, yet it was popular, so this is my attempt to expand on that topic a little.

Recently, while lying flat on my back on my sun porch, soaking up the vitamin D beating down on me and dreaming about code, which I always do, it struck me.


Have you ever wondered, at a high level, what it’s like to build production-level data pipelines on Databricks? What does it look like, and what tools do you use?

Ever wondered how to build an end-to-end project for an Open Source Python Package that gets published to PyPI? I built out lakescum, an open-source package to help with querying Databricks Unity Catalog Delta Lake tables with Polars, DuckDB, or PyArrow. https://github.com/danielbeach/lakescum
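I won’t try to reproduce the lakescum API from memory here, but this is the shape of the underlying problem it smooths over: getting a Delta Lake table into Polars. A minimal sketch using plain pl.scan_delta (not lakescum itself), with a hypothetical table path and columns:

```python
# Not the lakescum API ... just the underlying idea: reading a Delta
# Lake table into Polars. Assumes `pip install polars deltalake` and
# a Delta table on disk; the path and column names are hypothetical.
import polars as pl

# Lazily scan the Delta table so filters can be pushed down
# before anything is pulled into memory.
lazy = pl.scan_delta("/tmp/my_delta_table")

df = (
    lazy.filter(pl.col("event_date") >= "2024-01-01")
    .select(["event_date", "user_id"])
    .collect()
)
print(df.head())
```

The painful part with Unity Catalog tables is authentication and resolving where the table actually lives in cloud storage … presumably that is where a helper package earns its keep.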

Want to know how to grow into a Senior Engineering position? Take a look.

Most Software Engineers think too highly of themselves. They think they are the best and brightest coder alive, or who has ever lived. In doing so, they stunt their own growth into Senior Engineers and become hard to work with … the nightmare of the PR process.

You don’t need to be the smartest person in the room.

Unless you’ve been hiding under a rock, you’ve probably heard the hubbub over Devin, the new AI Software Engineer that is going to take your job.

While this is a genius piece of marketing … it’s a bunch of crud.

Never fear: you are in no more danger of losing your job in Software than when ChatGPT and Copilot hit the market. In fact, the opposite is true; there will be more Software jobs than ever, not fewer.

AI tools might look pretty in a video, but in reality, as anyone who actually uses them knows, they are still very bad at programming and don’t do a very good job. Most of what a Senior Engineer does that makes them Senior is not the spitting out of gobs of code. Any dingdong can do that.

I recently did a challenge, and the results were clear: DuckDB CANNOT handle larger-than-memory datasets. OOM errors. See the link below for more details.

DuckDB vs Polars – Thunderdome: the 16GB-on-a-4GB-machine Challenge.
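For context, the workload was a group-by aggregation over a dataset roughly four times the size of RAM. A minimal sketch of the two contenders, with hypothetical file paths and column names (not the exact challenge code):

```python
# A minimal sketch of a larger-than-memory aggregation, the kind of
# workload the challenge exercised. The file glob and column names
# are hypothetical; this is not the exact challenge code.
import duckdb
import polars as pl

# DuckDB: SQL directly over Parquet files.
duck_result = duckdb.sql(
    """
    SELECT trip_id, SUM(fare) AS total_fare
    FROM read_parquet('data/*.parquet')
    GROUP BY trip_id
    """
).df()

# Polars: lazy scan with the streaming engine, which works through
# the files in batches instead of materializing everything at once.
polars_result = (
    pl.scan_parquet("data/*.parquet")
    .group_by("trip_id")
    .agg(pl.col("fare").sum().alias("total_fare"))
    .collect(streaming=True)
)
```

That streaming collect is the whole game when the data doesn’t fit in memory.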


Recently an Architect at Databricks recommended people use Notebooks for Production workloads. A very bad and horrible idea. It means very expensive compute for most people (All Purpose Clusters), and it leads to horrible development practices. It set off a firestorm on LinkedIn when I commented that people SHOULD NOT follow this advice.
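If you want the cheaper and saner path, submit your code as a Job running on an ephemeral jobs cluster instead of an All Purpose Cluster. A minimal sketch against the Databricks Jobs 2.1 REST API; the host, token, file path, and cluster settings are all placeholders:

```python
# A minimal sketch of creating a Databricks Job that runs a plain
# Python file on an ephemeral jobs cluster (cheaper jobs-compute
# rates) instead of an All Purpose Cluster. Host, token, paths, and
# cluster settings are placeholders, not values from the post.
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

job_spec = {
    "name": "nightly_pipeline",
    "tasks": [
        {
            "task_key": "run_pipeline",
            "spark_python_task": {"python_file": "dbfs:/pipelines/main.py"},
            # new_cluster = spun up for the run, torn down after.
            "new_cluster": {
                "spark_version": "14.3.x-scala2.12",
                "node_type_id": "m5.xlarge",
                "num_workers": 2,
            },
        }
    ],
}

resp = requests.post(
    f"{HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
)
resp.raise_for_status()
print(resp.json())  # {"job_id": ...}
```

Version-controlled Python files, ephemeral clusters, no always-on compute … that is the shape of a Production workload, not a Notebook on an All Purpose Cluster.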

Read here and here