Data science operationalization is a simple enough concept.
But in practice it can be a complicated and often overwhelming challenge.
Here's what Nishan had to say about it.
Welcome to the Data Science Leaders podcast. Before we get going, you are based in Salt Lake City, Utah, correct?
Yeah, I am. Proud resident of Utah.
Proud resident of Utah. So folks I know who live in Utah are also great skiers. I imagine you’re a great skier as well. Is that correct?
Talk to me in five years and maybe I’ll get there. For now I’m just learning. I just started getting on the blues and the blacks this season, and it’s been so much fun.
Nice. It is a blast. Be safe out there, but I started my skiing journey about 20 years ago and it’s been a lot of fun. It’s an expensive hobby, but it is a lot of fun as well. It’s beautiful, it’s gorgeous, but you’ve also got to be safe. I’ve never skied in Utah, but I hope to at one point.
You are missing out, my friend.
I know, I know, I know.
Definitely come by at some point.
I will try, I will try. All right. So Nishan here is the VP of Algorithms at Overstock. He's seven years into his career and has clearly shot up the ladder to reach the VP rank, which I find very interesting and which we'll talk about in a bit. So before we get going, we get a lot of titles here on the Data Science Leaders podcast: Chief Data Scientist, VP of Analytics, and more. VP of Algorithms…I've seen the algorithm title, but what does VP of Algorithms mean to you compared to your other DSL colleagues?
I guess the two I'll contrast with are, one, the Chief Data Scientist title, and then maybe the more analytics-heavy titles. To me, a VP title, as opposed to Chief Data Scientist, means you're an organizational leader. Your function is not just the application of science, but the management of the team and the people who apply the science. And the contrast with analytics is, I think of what we do as building algorithmic products. Really driving business innovation, changing how we do business by building new capabilities. So it's very product-oriented that way. We're not just doing analysis to inform decisions, but hopefully putting products out there that inform decisions, and that, I think, is a pretty important nuance. I self-identify with my title quite a bit, I guess, in it being an organizational role and needing to put emphasis and care on the organizational side of things as well. And also, we need to launch products and get things out there. So it's more than analytics in that regard.
The topic of today’s podcast is really operationalizing data science and building machine learning data science type products. So we’ll definitely dive into that throughout the conversation today. In seven years, you’ve shot up to being this VP of Algorithms at Overstock. One of the things that I’ve also noted so far is a lot of our data science leader colleagues happened to have a PhD, and you do not. So my question is, when are you getting your PhD because clearly you’re inadequate?
Yeah, as soon as possible! I’ve been meaning to do it. It’s just this job gets in the way.
Right, right, this job thing. But I guess my question is, do you have plans of eventually getting a PhD, going back to school at all, or no?
I don’t know. It’s something I’ve contemplated for a while, and I probably won’t end up getting a PhD until maybe after I retire. I do want to get a PhD in a completely different topic at some point in time. Something around philosophy, but until then…
That’s another podcast altogether. We’re going to try to stay on topic, I’d love to dive into your philosophical leanings, but…
No. But till then, I don’t think so. To me the biggest value of not having a PhD is how humble it keeps me. I’m not humble bragging by any means, but more than half of my organization has PhDs. And so these folks are experts in their respective domains. They come in with a lot of knowledge and so it helps me approach every conversation with humility and learning where I’m like, they certainly know a lot more than me in regards to these specific things. I think that helps me become a better listener because one thing I’ve found is listening is a really important skill for leadership and it’s probably one of the hardest things to do. The other thing is everything is super fascinating and interesting to me. It allows me to do these dives into bodies of research that I had no idea about. And I have all my colleagues to help me in that journey. So not having a PhD has paid off really well for me so far.
Well, that’s a very healthy point of view, right? To see yourself, even as a leader, as someone who doesn’t have to be the smartest person in the room all the time. You don’t have to be the expert in every single area; you can’t be, given the breadth of data science…not to say you can’t, but it’s very difficult. And to see your team as this collection of experts is certainly interesting. So I applaud your healthy outlook on managing. I think that’s a great way to look at it, and I’m sure it resonates with your team; I’m sure they appreciate your listening and your style. I think that’s fantastic.
I think the first thing that it gets to is there’s no tension, right? I like to think the titles disappear. When I come to them as experts and I’m trying to learn, they get a lot more comfortable. They think more and they’re more open to expressing more of the out-there ideas and they might have something really interesting to work with at that point in time. The other thing is, I think for me… It’s not just me, right? There’s experts in all these other domains, but I think they could be learning from each other as well. So that is a really big emphasis I put in terms of how the organization is structured. Learning is front and center to the function of the organization. And so we have a bunch of things going that foster that culture.
Let’s start diving into how your team is organized. I do want to get into you seeing the work that you’re doing being embedded as a product, we will absolutely get there, but this is a fascinating topic. When you say in their areas of expertise, what areas do you have experts in that really help out Overstock and your team?
It may be good to give a very brief introduction. Overstock is an ecommerce platform. We’re a retail store so we handle all parts of the retail chain from operations and logistics, delivery, to the marketplace itself, and then discovery on the marketplace. As the algorithms unit all of these business problems, the algorithmic side of these problems, come to my organization. But mostly we focus on search and recommendations around discovery, helping users find what they’re looking for really. Then other parts of the business are around marketing, focusing on marketing challenges, how much do you spend? What kinds of targeting do you do? Supply chain forecast in terms of delivery estimates, shipping costs…so it’s a wide variety of problems that way.
The way we’ve grown the organization is to try to find experts in respective domains, respective parts of these problems. Around search, we have people focused on NLP, we have a lot of image modeling expertise, and we have traditional IR research expertise; those are some major ones. Then there’s OR, operations research, which is an evolving field as well. We have quite a few people with deep statistical expertise too, from various schools and various ways of approaching it. And a new role I’m really excited that we have in the organization now is that of an experimentation scientist. Our newest member of the organization, she has an econometrics and game theory background. We have a sponsored product platform that we maintain in-house as well, so we deal with a lot of economic factors in that. Econometrics and game theory and mechanism design and choice theory, all of these are expertise that we like to foster.
What is the goal of the experimentation scientist? Is it more R&D focused?
It’s uplifting our experimentation practices and understanding of how we conduct experiments.
Got it. So a design of experiments type?
Yeah. Design of experiments. So it could be randomized controlled trials, like A/B tests that we run on the site. But there are also, I think, more interesting counterfactual questions we could be asking. Actually, causal theory is another thing that we’re probably going to lean more towards in the future. Basically, our statistics, our numbers, describe the way things are today, right? So if you abstract away from that reality, in terms of numbers, to a model that describes the reality, then you can say what would happen if things were a different way. And I think the counterfactual and causality research is doing some really good work in these directions: being able to ask these counterfactual questions. These are applicable in all different ways.
One example is, what would happen if you were to not show sponsored results? That’s one form the question can take. You can also ask, what would happen if you didn’t send out a coupon, or didn’t spend this much on this advertising channel? Or you made this decision as opposed to that decision? Now, these questions have existed for a very long time, but I think these approaches, being able to capture causal relationships, allow us to answer those questions a whole lot better. I think it has the potential to give us a much more meaningful approach to data-driven decision-making in the company.
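To make the coupon example concrete, here is a minimal sketch of one common causal-inference technique, backdoor adjustment by stratification. This is an illustration of the general idea, not Overstock's actual method; the data shape (segment, got_coupon, converted) is hypothetical.

```python
from collections import defaultdict

def adjusted_effect(rows):
    """Estimate a treatment effect by backdoor adjustment: compute the
    treated-vs-untreated outcome difference within each confounder stratum,
    then average the differences weighted by stratum size.

    rows: iterable of (segment, treated, outcome) tuples, where `segment`
    is an observed confounder, `treated` is a bool, `outcome` is numeric.
    """
    strata = defaultdict(lambda: {True: [], False: []})
    for segment, treated, outcome in rows:
        strata[segment][treated].append(outcome)

    total = len(rows)
    effect = 0.0
    for groups in strata.values():
        treated, control = groups[True], groups[False]
        if not treated or not control:
            continue  # no within-stratum comparison possible; skip it
        diff = sum(treated) / len(treated) - sum(control) / len(control)
        effect += ((len(treated) + len(control)) / total) * diff
    return effect
```

A plain difference in means over the same rows can be badly biased if, say, coupons mostly went to one customer segment; stratifying on the segment first controls for that observed confounder, which is the core of what causal methods buy you here.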
That is very, very cool. I can imagine you must have a close partnership with the folks on the business side, because a lot of those questions I’m sure they’d be dying to know the answers to, is that the case?
Absolutely. As executives, we’re, as a general body, trying our best to come up with the right answers to these questions. A lot of the strategy questions, I think, can be much better answered with better data. The right modeling of the business domain, of the business challenges, is essential to being able to come up with the right answers there. If you solve this problem, it’s applicable to any business in any industry, really.
That’s great. It sounds like in your role as a VP of Algorithms you’re not just, and we’ll talk about this again in a sec, you’re not just talking about embedding some of the models into production settings, into Overstock’s public-facing site. But you’re also involved in some of the strategy conversations as well. Trying to understand what would happen if we didn’t spend as much on this campaign or that campaign and that kind of thing. Some bit of “what if” analysis. That’s fantastic. Let’s dive into the core topic of the show. I think a lot of data science leaders out there are very curious about the topic of productionalizing models in general, and really embedding them into customer-facing or user-facing places. Search is obviously a big area, when you look at your career it’s been a big focus. Talk to us just a bit, first of all, just dumb it down for people like myself on what it is that you do to improve search and what does a day-to-day look like from your perspective?
As of maybe 10 or more years ago, though various industries have seen this transformation at different times, search used to be mostly driven by information retrieval. It was all about what information is in the document and then how do you surface that? I think a lot of the evolution of machine learning in search has come down to, well, what is the user looking for? And different users might be looking for different things. In ecommerce, that gets further complicated: a user might want to look at something, but they might want to buy something that’s a little different from the things they enjoy just looking at. What are those differences and how do you model those challenges? So that broadly sets the context for the search problem that we look at.
Let me try to put it in my own words. You’re saying search started with just looking at the keywords that were typed in and making sure that what was retrieved and the search results matched logically the words that were found. But truly what search is all about is understanding what is the person who’s searching actually looking for? The words that they put in might not be exactly the right words. That’s a great way of looking at it, for sure. I can see that being a much more difficult challenge.
Yeah. We look at it as moving from “search for X, get X,” which is how it’s laid out today, to “search with the intent of X, get X,” which is where we’re evolving towards. Different people might have different intents, and we want to capture that and all its nuances and serve every customer, whether that’s collecting everything that everyone might want into this one experience and showing it to everyone, or knowing better what you personally want and showing you those things. Those are the trade-offs and challenges we work through in terms of search. A lot of the work we do on search is evolving towards truly capturing the intent of what the customer is looking for. That’s where we’re going.
Then it also gets blurred with recommendations. A lot of times recommendations are discovery-oriented: you’ve done some things with us in the past, or this is what we think you might like, and those get surfaced. The only distinction, I think, is that there’s a conscious action with a search, whereas with a recommendation, at least on the surface, there’s no specific action the user is undertaking. But then that also gets blurry, right? Because they do scroll to look at recommendations. If you’re showing good recommendations, they’re probably scrolling through your site. I think the journey we’re on in this sense is, how do we give them what they’re looking for with less and less friction?
How do we give customers what they want without them needing to, say, type the exact magic keyword to get the products they’re looking for? This has really hit home for me at Overstock because oftentimes I don’t know the name of what things are called. Like, what’s an ottoman? Am I looking for a bookshelf? Am I looking…and so those lines get super blurry, and as a customer coming to the platform, I have no idea what to call the thing I’m looking for. Either I can look at the structures that exist, which is navigation: I look at how people have organized things and maybe try to find my way there. Or I just type whatever comes to mind, and I’ve started my journey into discovering what I’m looking for.
I think it’s the search and recommendations, the discovery problem is evolving and we’re just at the start. I think we’re going to end up with really interesting experiences that give you what you’re looking for with a lot less friction and help you find things that you didn’t even know you were looking for, but happened to suit your needs fairly well.
Right. Basically reading the mind of the person who’s searching is really the goal. Speaking of which, how do you improve upon this search? To improve upon it, there has to be some yardstick, some metric that you’re looking to measure. How does that process work?
Absolutely. We have, broadly classified, online and offline metrics. Online metrics are what the users did, and offline metrics are what we use to inform and train our models. There are very standard metrics out there; NDCG, normalized discounted cumulative gain, is one that’s very standardized. It’s essentially saying, if you know what kinds of things are rewarded, which you usually end up getting either from human labels or by looking at what users have done in the past, then once you have that label, how well do you rank the things that are rewarded? Once you define the reward, the metric is normalized so it’s bounded between zero and one, and discounted so lower positions count less. That’s standard in terms of metric calculation. But what it means to be rewarded, that varies across different systems and challenges. So those are the offline ones.
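As a rough sketch of the calculation described above (the relevance labels here are placeholders; in practice they come from human judgments or historical user behavior):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: each relevance score is discounted
    logarithmically by its position in the ranked list."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """Normalize by the DCG of the ideal (descending) ordering,
    so the score is bounded between 0 and 1."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0
```

A perfect ranking scores 1.0; putting highly relevant items lower in the list is penalized by the log discount, which is what makes the metric sensitive to ordering and not just to which items were retrieved.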
And then there are online metrics, your standard ones you’d expect: click-through rate, and for ecommerce there’s conversion; adding to cart could be something that’s meaningful as well. Then once you have some solution and you want to go deeper into solving some specific problem, there might be other interesting metrics as well. Like how long did the user stay on the platform? Or were we effective at getting people to the next stage of the funnel? If you’re on a search page and you found something, you probably want to learn more about that product, so you go to the product description page; that can be an online metric you track as well. Or frustration of the user can be a different metric you capture: how many times did the user search for something very similar? That means they’re probably not finding what they’re looking for. So yeah, there’s plenty of metrics.
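For illustration, the standard online metrics mentioned here can be tallied from a simple event log. The event names and (session_id, event_type) shape are hypothetical, not Overstock's actual schema:

```python
def funnel_metrics(events):
    """Compute a few standard online metrics from a stream of
    (session_id, event_type) pairs."""
    counts = {"impression": 0, "click": 0, "add_to_cart": 0, "purchase": 0}
    for _, event_type in events:
        if event_type in counts:
            counts[event_type] += 1

    impressions = counts["impression"] or 1  # guard against division by zero
    return {
        "ctr": counts["click"] / impressions,
        "add_to_cart_rate": counts["add_to_cart"] / impressions,
        "conversion": counts["purchase"] / impressions,
    }
```

In practice each rate would be segmented (by query, page, experiment arm, and so on) rather than computed globally, but the funnel structure, impressions down to purchases, is the same.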
In my experience, time on site, as an example, can be a tricky one because time on site, if the user is on the site for a long period of time, it could be just because they’re having trouble finding the product, which could be a bad thing. On the other hand, if they’re on the site for a long period of time and they’re buying things all along the way and finding what they need, then that’s a fantastic experience. You have to mix, I imagine, that time on site with some other metrics in order to understand whether or not they’re actually having a good experience and a good outcome. And then there’s good outcomes for the buyer, the user, and there’s good outcomes for Overstock too. And you also have to mix that in.
Well, that’s actually really interesting because I think a large part of successfully solving these problems is aligning interests. How do we make sure our partners have the same interests as Overstock, as our customers are having? I strongly believe it comes down to the customer. If you’re serving your customer well, you’ll have a good business and you’ll have good partners that work with you. The things I find super fascinating, especially with some of the marketing products that we build, like the auctions, is how do we align everything towards the user’s intent and serve that better?
Yeah, all about the user. If you serve their needs and keep them at the top of the list, their lifetime value will only increase. Let’s talk a little bit about how you actually roll out changes, let’s say to search, on a regular basis. Can you talk through that process a bit? I think people are very interested in taking workbench data science and experimentation work and getting it into production. Does your team own that? Do you partner with folks on the IT side who might own the site? Tell us more about that.
Let’s take a minute to talk about the process of science and machine learning. To discover anything worthwhile, the scientist takes a particular thing, puts it on a workbench, in a very physical form, right? And then very carefully teases out signal from noise to get to a point where they’re like, okay, with all of these right parameters, with these very specific, precise ways to do things, I think I’ve discovered something useful. I’ve discovered something that adds value. Now there’s this whole challenge of having discovered that. In its pure, pristine form, with all the correct parameters, how do you get that into a production context? If you have to involve a lot of people in that process, there are lots of chances for miscommunication and error.
It’s like the productionalizing engineer didn’t realize how important it was to preserve all of these metrics. Or all of a sudden your data started changing and they didn’t realize it was really important that the data preserve its form. Lots of those things can happen. The way we’ve approached this is to empower the teams to own the end-to-end problem. Let’s take ranking as an example. For search, the two primary teams that build our search systems are core search and ranking. Core search handles the candidate set selection problem and lots of the systematic challenges, and ranking is responsible for the re-ranking problem: given a set of products, how do I best order them for the customer? That’s at least the current way we have it set up. But each of these teams owns individual systems that they deploy, they manage, they look at dashboards for.
So they’re responsible for not just the discovery, the problem formulation, the work, the development, the science on it, as well as the production and maintenance of it. It’s very similar to DevOps as a mindset and further extension of it to the machine learning space.
This is gold right here, Nishan. One thing that I’ve seen, is that it is not uncommon that you have a data science team doing data science workbench work and then they pass it off to, say, an ML engineer or I think you called it a productionization engineer, that might be in a separate team altogether, right? And they even will take the model, potentially rewrite it, and make sure that it can work in their downstream production system. In this case it would be your .com site. You’re saying no, the teams at Overstock are aligned by product, in this case it could be your search ranking team and that includes the ML engineer along with the data scientists. Talk a little bit about what those teams look like.
We’re really responsible for algorithmic products, and so we are structured from a product perspective that way, but Algorithms is also the center of excellence for machine learning and AI. Because that’s really important too. At our size, we’re probably 5% of the company in terms of total number of people focusing on a problem. I think it’s really important for us to preserve that excellence and foster a culture where people learn from one another and grow. As we talked about 10 minutes ago, that culture is super important, but then we still need a product orientation. We spend a lot of time on how we design these teams, and that design ends up having a lot to do with the product orientation. What is the right product that this team can own and maintain?
And those boundaries do evolve over time, but once we do that, we make sure they’re self-sufficient. So what skill sets does the team need for it to be able to take it all the way home? We have machine learning teams whose core focus is machine learning, but that also have people who are maybe growing towards that journey, or handling more of the engineering sides of machine learning. So broad classification-wise, we have machine learning scientists and engineers. They’re all part of the team, and there are no defined rules saying a scientist does this and an engineer does that. When we figure out who’s working on what, it’s like, well, it makes sense for more of the engineers to take on this problem because it’s more engineering-heavy. Well, there’s some science involved with it, so they need to make sure they work with this person, either as a mentor or as a collaborator, whatever structure makes sense at that point in time. The ownership exists, but the lines of what’s engineering and what’s science? Those are blurry.
Also, even folks that have DevOps kinds of roles train models as well. I think that’s super important. People evolve and they change, right? We have folks that started out as analysts, and we’ve seen them go into both the scientist track and the engineer track based on their interests. And these interests evolve over time as they discover more of what they enjoy doing, what they’re good at, and what they’re not. I think that’s a journey everyone goes through, but the organizational structure needs to be able to support these journeys and discoveries while ensuring quality in terms of the product we deliver. So on a high level: maintain a center of excellence around our field, with our north star being the research that comes out in this industry, or anything applicable to the industry that’s even vaguely related, right?
Make sure there’s enough interest and collaboration, and an environment where people can actually take the time to study or research, but then set it up in ways where people are accountable for the end-to-end delivery of whatever problem they’re working on. That allows them to be more engaged in the problem, understand the domain better, and feel empowered. Because it’s really gratifying to productionalize something and see users interact with it.
And then measure the results, right?
And measure the results.
For each of these teams, do you goal them on a particular metric? Some of the metrics that we were talking about before, is that something that they have a dashboard you’re looking at and reviewing with them on a regular basis?
Absolutely. I wouldn’t review it with them on a regular basis that way, but they’re expected to have one, they’re expected to know what their KPIs are, and they’re expected to translate each problem they’re solving into a KPI that’s measurable. That’s either a primary KPI or a secondary KPI, and it’s business- and user-focused. I talked about offline metrics and online metrics, and a lot of the time the science happens with the offline metrics, but the accountability at the team or scientist level is still for the production KPI. They’re being measured on business outcomes that way. I think that helps shape a lot of the thinking towards what’s good for the customer: what can I do that is actually going to have a significant effect?
Lots of times, I think we run into the problem of trying to apply fancy methods for the sake of fanciness. And I’ve found that this shift towards ownership changes the way you look at your work. So one of our principles, and we borrowed this from Airbnb, is owning our impact. By owning your impact, you no longer focus on what model you’re using or what you’re doing. All of those are means to the end, and the end is affecting business outcomes. I think that allows our scientists to understand the space and the environment they operate in, the business, a whole lot better. And I think it ultimately leads to much better engagement as well.
I imagine you have business sponsors on your side; there’s a budget that you have, and being able to justify that budget with metrics that can be understood by your business counterparts is critical. Making sure that your data science team is speaking that same language makes it easier to justify your existence and your growth as a team. You mentioned, and this makes total sense, that there are blurred lines between the ML engineering track and the ML scientist track, but there has to be some difference in your mind, at least in broad brush strokes, right? In terms of their focus, how would you describe the high-level differences between the two roles?
On a high level, the scientist is expected to come up with a reasonable model that fits the business problem well. And the engineer is responsible for the right systems that solve the business problem well. As an example, if a machine learning scientist is trying to learn some engineering, they probably won’t design the overall architecture of how the systems run, right? They’ll probably take guidance from some engineer and vice versa. If the engineer’s working on a specific modeling problem, they’ll probably be getting guidance from a scientist. But then if a scientist is working on a science problem, they’re expected to define what the right modeling approach is and then justify why they came up with these particular approaches and the right model structures, and even the metric selection. Why did you pick this particular metric to solve this problem? And then there’s a whole bunch of assumptions and stuff that go into it. And so that’s probably the major delineation in terms of the roles.
Got it. That makes perfect sense. So the ML engineer is someone who clearly helps to productionalize the model, but also can probably help out with training in terms of getting the data to the ML scientist who’s a bit more focused on the experimentation of trying different modeling algorithms to try to optimize for whatever KPI that team is working on. One quick question, so there’s those two roles, but then are there any other roles within these product teams? Do you have a product manager or anything like that?
Absolutely, we do. And it was something I was very set on from the get-go. And the best product managers I’ve worked with didn’t really want to be product managers.
Why do you say that?
Well, a few come to mind. The algorithms product managers usually have an analytics background; they used to be analysts supporting certain functions. But the good ones, they can speak really well. By speak I mean they can distill the conversation, the challenges that are coming up, into a cogent formulation of the problem. They have the capability to understand the complexities of what scientists and engineers are talking about, and they have an eye towards value. They know what’s going to provide value. They can look at the numbers, they can make reasonable decisions for the business, and they have a good understanding of the technology constraints or collaboration…company challenges, financial constraints. And so I think good product managers are essential to a machine learning product. But it’s a new and growing industry.
It’s not obvious what the right skill sets are to be a good machine learning or algorithms product manager. You can’t find that in a book, at least not now. The way I look at it, it’s been a journey for us to figure out what the right skills are. You approach it with humility and learning, and work with the team to evolve the practices, the processes, or even the product formulation that you end up with. And the good product managers, they’re really part of the team. There’s a lot of trust, and there’s a natural inclination towards taking on particular tasks, but I find the manager and the product manager often end up swapping roles based on whatever is going on. There might be certain technical things that just need to get done and the manager needs to focus on that, so the product manager steps in; or the manager sometimes represents the product manager in certain product meetings as well. I think it’s a really necessary role, but it’s probably a very misunderstood role.
A lot of times, product managers learn these concepts in terms of what product management means. And it’s very hard to apply that to machine learning because if a product manager comes with, “Yeah, I want this product and I want it done within the sprint…” You’ll probably get it, but it’s probably going to be so far away from the problem you tried to solve, that you might’ve been better off spending a month on the problem. And that’s a trade off. It’s not always obvious whether spending more time will give you a better solution or not. But there’s so much more uncertainty in building machine learning products than there is in building software. You can’t expect the same process to translate to building machine learning products. A good product manager there is one that really understands the uncertainty and works with the uncertainty, in terms of the communication, the deliverables.
There’s a lot of content out there that talks about the differences between data science, and the data science life cycle, and the traditional SDLC, the software development life cycle. With software, you have a spec, you have requirements, and you can design to the spec, and as long as it meets certain SLAs, it’s good, it’s done. With data science, you can build a model, put it out there, and maybe it’s your search ranking algorithm. And it might be good on day one, but on day 50 it starts to be not so good, and then it keeps getting worse, right? So you have this model decay issue, and there’s a different mindset that you need to have. Taking a traditional product manager and just assuming that he or she understands that difference is probably a challenge. I imagine there’s cross training involved, right? To take maybe somebody with a traditional product management background and train them on data science, or maybe they have a data science background and you feel that they’d be better suited for that algorithms product manager role.
Do you find that most of your PMs start from the data science world and move, or do they come from the traditional PM world?
Very rarely, I think, at least for the core algorithm products. But then we actually have a program right now. Our group product manager for algorithms is driving a training program for the rest of the product managers, with some really cool presentations where the algorithms product managers talk through their products and how they approach problem solving, in language that’s maybe more palatable to the rest of product management. So there’s that cross training that’s absolutely essential. But what we’ve found is we still need a product manager that’s well versed and well assimilated into the machine learning team to represent that team. So going back to saying we own algorithm products: we have algorithmic products, and product managers who own and represent those products.
They’re talking to other product managers, figuring out what the right collaboration is and how that product needs to evolve. But their product mindset exists both internally and externally to the problem that’s being handled, and we found that that works best. I think over time, more product managers are going to get better at understanding what they can ask the algorithmic teams to build. What’s the right problem to come to us for? What’s the right level of conversation? Oftentimes we find in software, product managers define, “This is the feature I want.” But in machine learning, that’s not very useful. It’s much more useful if the product manager comes with, “These are the outcomes I want to effect, and how do you propose we go about it?” There are pitfalls both ways. An engineer will over-engineer most of the time, but a really good product manager is able to distill what’s really important to be part of a product while pulling the team back when they have a tendency to over-engineer.
But the conversation happens around asking for the right metrics, the right kind of approach. So the conversations you end up having, I think, look very different in the weeds. But at a high level, when you talk about it in terms of a product and customer orientation, the same notions can apply.
Well, this has been absolutely fascinating, Nishan. I think you’ve certainly hit upon a strategy that makes a lot of sense, and it is a bit different, right? Having more of a product-focused strategy when it comes to operationalizing machine learning within your Algorithms team, and even creating a product manager type role that’s embedded within these teams, is gold to me. It just makes a whole heck of a lot of sense. I think like you said, there’s no book on it, but maybe you could write that book.
Well, I think we have some other people that would do a much better job than me!
All right. Well, we’ll let them write the book and maybe you can write the foreword to the book. How about that?
I would love that.
Well, it’s been great. Have fun on the ski slopes there in Utah. I wish you the best of luck and we’ll be signing off here until next time, everyone. Thank you very much.
Thank you Dave, it was a fun conversation.
Data Science Leaders is a podcast for data science teams that are pushing the limits of what machine learning models can do at the world’s most impactful companies.
In each episode, host Dave Cole interviews a leader in data science. We’ll discuss how to build and enable data science teams, create scalable processes, collaborate cross-functionally, communicate with business stakeholders, and more.
Our conversations will be full of real stories, breakthrough strategies, and critical insights—all data points to build your own model for enterprise data science success.