Dispatches from Day Three of the Operational Analytics Conference
Disasters at work are never a fun topic when you’re in the moment, but they’re always fun to look back on once they’re far behind you. … Far, far behind you. Our panel on day three of the Operational Analytics Conference focused on the messy, painful, confidence-scarring mistakes some of the top data pros in the industry have made.
With us to share their horror stories (along with a lot of good stuff too – after all, most bad experiences at work become learning experiences) were:
- Julie Beynon - Head of Analytics at Clearbit
- Jeff Sloan - Data Product Manager for Treatwell
- Olya Tanner - Customer Data Architect here at Census
- Brooks Taylor - Lead Data Scientist at Segment
- Eric Xiao - Senior Data Platform Engineer at Shopify
There were a lot of fun (in retrospect) stories, but Brooks’ “The Mystery of the Missing $5 Million” definitely stood out.
Brooks Taylor: I feel like I'm making more mistakes as I go along, which maybe isn't how learning is supposed to work, but I feel like as you get trusted with more and more things, you have the opportunity to ruin more and more things. My most recent one happened when I was building some revenue tables. Our business has a self-service side and an invoice side, and they need continual reconciliation, so I was trying to build the source of truth to let us do that. In order to track revenue, you have to have all of your historical data laid out in front of you, because we track it incrementally. To know what someone's paying us now, you have to know what they'd been paying us since whenever they first became a customer.
I'm looking for old historical records and I can't get the data I need, and I can't sync it in. Our syncing tool does an incremental sync, and because these records are three years old, they're never going to come across. So, I'm like, easy – I'll just put a touch on them. I won't change the data, I'll just update the last-updated timestamp. That'll let me pull it in, I can calculate it, we can get this analysis done and run forward. If I were to liken it to a physical metaphor, the data was like the golden idol in “Raiders of the Lost Ark”, and all I needed to do was put the weighted bag in place of the idol. If I could just swap that in really quick, no one would even notice, I would have what I needed, and I wouldn't need to tell anyone.
Anyone who's seen the movie knows exactly what happens next, which is of course that I sprung every one of the temple traps. I ended up triggering all these other underlying calculation processes that relied on the same data. I showed up to work two hours later and my entire floor is just flying around, and everyone's like, "We've lost $5 million. We don't know where it went. Do you know anything about that?" I was like, "I know something about that. Yeah, sure, sure. Can we talk about what that looked like when that happened?" We traced it back and it was definitely 100 percent my fault. A big lesson I learned is that even if it's old data, the ecosystem is really, really connected at the end of the day, so if you think you're making a change, even one that looks like no change at all, it's better to have a little more context on what else is looking at that same data.
I have never lost $5 million in a data equation before, but I certainly have made mistakes that felt like I had. If you sign up for a demo of Census, maybe I’ll tell you one of my stories.
Listen to the entire discussion below and be sure to check out the other sessions as well.