Data clouds are incredibly fast: retrieving data from even the largest data sets is quick and easy. The computing power you can bring to bear with Snowflake data warehouses is almost limitless. For those of us who began working with data in the '90s, the '80s, or earlier, it's the stuff of dreams.
But it does raise questions about the practices we developed to work within constrained computing and storage, and whether those practices are still necessary in a data world seemingly without limits. To help answer that, I'll share my journey from technical writing to data modeling and my observations on what is here now and what lies ahead.
I began my career in tech as a technical writer in the late '90s, but that's not exactly what I'm here to tell you about. I spent my first six months documenting databases for a department of DBAs, then suddenly made the leap into data modeling: the credit card company I worked for needed data modelers fast and was willing to train me right away.
My career in data had begun, and I've never looked back. I took a series of courses covering conceptual, logical, and physical data modeling, followed by training in SQL, ETL, and database administration.