Data Science Leaders | Episode 28 | 37:09 | November 16, 2021
How a Centralized Data Science “Nerve Center” Can Power Global Impact
There are many ways to structure a data science function in a global enterprise. But what’s been the winning strategy for global technology distributor Ingram Micro? Creating a data science “nerve center.”
Centralizing data science talent has helped elevate analytics at Ingram Micro to better solve complex business problems using machine learning and AI.
In this episode, Tim Suhling, VP Global Business Intelligence at Ingram Micro, explains how it all happened, and what data science leaders everywhere can learn from the transformation. Plus, he shares his perspective on how data science can impact “Customer 360” programs and different approaches to measuring the success of models.
- The relationship between data science and business intelligence
- Embarking on a customer 360 initiative
- Measuring the effectiveness of data science
Hello! Welcome to another episode of the Data Science Leaders podcast. I’m your host, Dave Cole. Today, we have Tim Suhling. Tim, welcome to the Data Science Leaders podcast.
Thank you very much, Dave. Looking forward to being here.
Tim is the VP of Global Business Intelligence at Ingram Micro. Business intelligence and data science are not the same thing. Today that’s one of the things that we're going to dive into, head first, trying to understand the symbiotic or complementary relationship between data science and BI. I’d love to see how you're doing that at Ingram Micro, Tim.
The next thing we're going to talk about is the customer 360 initiative that you all are embarking on. If I had a penny for every customer 360 initiative that I saw at companies, I'd be a very wealthy man. But there's a reason for that, right? There's a reason that folks or companies are really trying to better understand their customers.
Last but not least, we're going to talk about measuring data science. On the Data Science Leaders podcast we've talked about that a lot. I think it's really important to get different perspectives on how companies can measure the effectiveness of their data science initiatives.
Let's go ahead and dive right in, Tim. Let's talk about what you're doing at Ingram Micro. I want to understand your role and how you see BI helping data science teams.
Sure. Great segue. Let me start out with a bit of a history lesson on Ingram Micro. First, just for those who are listening in, we are a technology distributor. This means that we distribute technology to customers who do installations and manage services for end users, which are the customers at the end of the value proposition. Ingram Micro has been in business for 40 years. Today we are the world's largest distributor. We have a significant global footprint which creates analytical challenges.
To speak to the question of business intelligence and data science, let's go back historically and look at what we were, four to five years ago. We were a traditional analytics organization, meaning that we had analytics resources solving one-off problems. And that's a good thing, right? The organization would come to us with a varying series of questions: how do we look at customers differently; how do we understand who's going to buy from a propensity perspective; how can we look at what our warehouse optimization could and should be, meaning how many warehouses should we have open in a specific country? Those are good questions but they were problem statements. They weren't products; they were projects. We were taking on project work and delivering a set of recommendations back to the business. We were doing a really good job at that: solving real-world problems and thereby either removing costs or potentially driving additional margin through the business.
We gave ourselves a nice thumbs up and had a good amount of success. That was a lot of fun but, over time, I think we realized that we needed to become much richer in our analytic footprint. We knew that there were bigger problems that we wanted to tackle through the use of machine learning and AI.
What we did is take a step back and really look at what we wanted to be when we grew up, for lack of a better term. What did we want to be for Ingram? We looked at and assessed what our key stakeholders wanted from Ingram—if our analytic and data science teams had the world as their oyster—what would they want out of our team? At that point in time we simply restructured. We had a team in Irvine, which is our corporate headquarters, and a team in Chennai, India. We decided that we were going to take a localized approach and create a data science nerve center. That's really where the magic began.
You're talking about the nerve center. What do you mean by that? When I think of a nerve center, I think of a centralized location. You're the brain and then there are the nerves in all the various countries. Where did you decide to centralize data science and where were the various pockets around the globe? Talk a little bit about why you felt localization was important.
Yeah, that's perfect. After looking quite a bit at it, you think of the airplane model, right, with a hub and spoke: Delta and Atlanta, American and Dallas. We decided that we could bring all of our talent together in one centralized location—our data science talent, to be specific.
After a bit of surveying and site visits, we decided that Toronto was really a good location to house our data science team for a variety of reasons. One: you have a concentrated amount of industry. The majority of Canadian businesses run out of Toronto. Additionally, there is a wealth of great universities to begin recruiting from and bringing folks onto our data science organization.
So what does a nerve center mean in practice? It really means that we had a deep bench of talent who could all collaborate together. Of course, I'm speaking pre-COVID and I'll get to the COVID way in which we work, in a moment.
What we were doing was putting our team in one centralized location in downtown Toronto, a phenomenal office location. We were able to then whiteboard ideas, work collaboratively, hand off projects, and look to hire individuals with very diverse backgrounds. I think most have realized that diversity is king and trumps pretty much everything else in the world in which we live today. We were hiring folks with 20 years of experience as well as those who were fresh out of school; those who had varying backgrounds, whether it be specifically in operations or advanced AI or facial recognition. We would span the gamut and ideally have each of our individuals cross-train each other, to become a more robust and self-serving organization with multiple touchpoints to deliver responses to organizational requests.
That became a bit of our nerve center and it worked extraordinarily well. It created that hub-and-spoke concept where we created at the same time—and this is part of the organizational design—teams that would service both the Asia-Pacific region and our European business.
Why is that important for Ingram? We divide our organization into three regions: the Americas, Asia-Pacific, which also includes Australia and New Zealand, and then our EMEA region of the Middle East and Europe. Our core functions reside in Western Europe. The idea was that we would put an office in each of those locations, which would be our core analytic team. Getting back to the idea of business intelligence, there would be analytics supported by deep data science.
Among those regional hubs, the key one for Europe is in Barcelona. For the Asia-Pacific region, we flex between Mumbai and Chennai, both locations in India. The idea was that they would be our frontline associates, working very closely with individual countries, stakeholders, vendors (manufacturers such as Lenovo, HPI, HPE, Cisco, etc.) in those individual regions to begin taking an analytic approach. When they got to a problem that needed deep data science, what they'd be doing is tapping back into our team in Toronto.
It became a handoff and collaborative relationship. We knew that we could take certain projects to full completion in some of our hubs, but also we needed that deeper bench, depending on the sophistication of that activity in which case, we could go back to that nerve center in Toronto.
It became a model that simply ran on its own and has done so during the last two years of lockdown. It has become, in my opinion, something that's easily replicable by other firms and a model which generates synergies. You're able to be closer and more intimate with your customers, whoever they may be in your regions, but also be able to tap back into a corporate entity that can support specific questions and problems.
If I were to summarize, it sounds like the nerve center in Toronto houses the data scientists who are actually building the models. The folks who are in the spoke are out in India, your APAC region, and other areas. They're the ones who are collecting the requirements and helping to translate some of the models, getting them closer to the end user, so-to-speak. Is that right?
What is interesting to me is the makeup of the people that you hired in these regional offices. Folks who had some data science acumen—did they come from a BI background? Did you train them? I would want my eyes and ears, if I'm in Toronto, to be people who really understood data science, not just understanding the local challenges and issues.
The associates we hired in the regional hubs were individuals with data science or analytic backgrounds. They had to have that core fundamental understanding, whether it be through past work experience or their educational training. Those individuals, generally speaking, were very curious about the idea of working in conjunction with a sales rep, right, with a leader in a specific country. They very much wanted and had the aptitude to dive into the business, get their hands dirty with an individual country, understand what the root causes were, build out requirements, and take the analytics to a certain level before a handoff. They also had that frontline persona, which is something that not everybody really aspires to.
They had the ability to translate that need back to our hub, our nerve center in Toronto, to drive those long-term or sustainable model builds and then results that are taken from those models and then implemented back in the field. Like I mentioned, it becomes a handoff with frontline associates working very closely with the business, then translating that back to our data science team.
The idea of data science being centrally located is that you get the benefits of collaboration, like real-world in-the-office whiteboarding. Hopefully one day we'll go back to the office like we did pre-COVID. That makes a lot of sense to me.
I want to dig in a little bit on the localization aspects. I don't know too much about your business but it seems like if I'm selling technology, like laptops, to large corporations that it's different from one country to the next. How different can it be? Can you give me some examples of how that localization has paid off? You could just as easily have folks in the United States who work with these various BUs and leaders, but you decided not to. Help me understand. Maybe you have some stories there.
It's interesting, right? Pre-Ingram, I came from a CPG background. The way we looked at a product was: a product is a product, right? Every product has certain attributes associated with it. Whether you're selling bananas or cereal, ultimately, it's the same from an analytic perspective but not, obviously, from a sales motion perspective.
The reason for hyper-localization in Ingram is that we carry a different line card in each country. That's an important point. France is different from Spain, which is different from Italy. This is the makeup of our business, our go-to market strategies, because Ingram was built through a series of acquisitions, like most multinationals. The behavior or presence of a company that we acquired was unique to the country next door.
We knew that we needed to be specifically localized. In Europe especially, with the advent of GDPR data use and disclosure laws, we needed to ensure that our work started, completed and terminated within the EU. Being localized there is critical, in addition to dealing with the language intricacies and other items I mentioned previously around line cards and paths to market. This is all centered around the customer profile.
In some countries, Ingram might have an entire retail business where the majority of it sells into traditional retailers, similar to a Best Buy. In other countries we may sell directly to more customers and be heavier on that side of the equation. This changes the inputs into the models, which we're driving, as well as the conversations with those local resources. Localization follows Ingram's strategy around being the best in the market around a specific idea, category and path to market.
Our goal as an organization has always been to be the number one or two player in each country that we're in. We want to ensure that we're not always a one-size-fits-all business, and have the ability to really customize our approach when meeting our customers’ needs. Analytics need to follow that same approach in order to be successful.
That makes a lot of sense. You mentioned customers a few times. I wanted to dive into the customer 360 initiative. I made a joke at the outset. I do think it's important. I've also heard folks sort of calling it KYC, “Know Your Customer.” I personally think that not every company does this the right way. What are some of your lessons learned during a customer 360 initiative at Ingram?
It's interesting. I’ve been here for about four and a half years. When I started, I sat down with one of our executives in the C-suite. He made a comment that every one of our customers was B2B and knew their end users perfectly. On one of his first visits out to a customer, he brought along a bit of an understanding of that customer's total business: who else sold to that end user within our profile, if that makes sense.
He blew the socks off the individual he was meeting because that person realized that they only sold about 50% to that end user, meaning there was another 50% share that they were completely unaware of. That really struck a chord with me. We all knew that Ingram may have 25, 50, 75% share of the total addressable market for a customer, but we were always wondering what that other percentage was, right? It's a pie game. You know you have a certain percentage of the pie. We all know that there's leakage. Anybody who thinks that there's no leakage is blind to reality.
By leakage, I think you mean competitors, right?
Right. Competitors. Good call out. The beauty of being in a good competitive environment is it makes us intensely focused on becoming better, ourselves. Competition is a beautiful thing, right? We started to look at and dive deeper into this idea of what we were leaking and what is going to other distributors.
We found that we could create a series of models in conjunction with a series of third-party data sets, that would allow us to understand what our total addressable market is, by customer. If we're selling 50%, we are now able to identify what the additional 50% is.
From a product standpoint, we can't necessarily identify where it's being purchased, but we know what that product set is. That was done through a deep learning algorithm. We have the ability to monitor the behavior of a specific customer over a period of time and, then based upon that behavioral profile, some additional sets of models against it. Then we begin looking at what we believe is being purchased outside of our purview. This is interesting and I think it's a capability that most organizations can implement with the right amount of time and investment into understanding what that periphery looks like.
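The core arithmetic behind that capability could be sketched roughly as follows. This is a minimal illustration only: the real models Tim describes use deep learning over behavioral profiles, and the function and field names here are invented for the example.

```python
# Hedged sketch of the "leakage" arithmetic behind the customer 360 view:
# compare what a customer buys from us with a third-party estimate of
# their total technology spend. Names and figures are illustrative only;
# the actual capability layers deep learning over behavioral profiles.

def estimate_leakage(our_sales, third_party_tam):
    """Return (our_share, leaked_dollars) for one customer."""
    if third_party_tam <= 0:
        raise ValueError("TAM estimate must be positive")
    # Cap share at 100% in case internal sales exceed the TAM estimate.
    share = min(our_sales / third_party_tam, 1.0)
    leaked = max(third_party_tam - our_sales, 0.0)
    return share, leaked

# A customer spending $1M with us out of an estimated $2M total spend:
share, leaked = estimate_leakage(1_000_000, 2_000_000)
print(f"our share: {share:.0%}, leaking elsewhere: ${leaked:,.0f}")
```

The interesting modeling work is in producing the `third_party_tam` estimate itself; once that exists, identifying the leaked share is simple bookkeeping like the above.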
What becomes more interesting though, in my opinion, is how you activate it. Every analytic organization on the globe, in my opinion, faces the same challenge. It's activation, whether you're a consultant, building models for an individual company, or an in-house organization. It's the activation. We all depend at some point in time on an operations or sales associate, whatever it may be, to take those learnings to market and begin changing behavior.
Change management is the most difficult part of the customer 360 initiative. That goes without saying. That is the largest factor in what we do from a data science perspective: it's change management. It's getting an associate who has been doing a certain set of activities successfully for 15 years, and asking them to change their behavior, to look at an opportunity a bit differently.
Through the support of our executive leadership, and solutions such as Microsoft Dynamics, our CRM solution, we've been able to pump leads directly to our sales teams for each customer around the globe.
Today we have a significant number of customers from a B2B standpoint: in the hundreds of thousands. We’re able to profile all of them and their complete 360° view of where our opportunity lies. That gives us a unique, competitive advantage, right? It allows us to have differentiated conversations. We're not going to win every deal because we have an understanding that that business is leaking elsewhere. What we will do is learn more about our customers and how we can better service them in the future.
On top of this activity of putting leads into our CRM system, which our sales reps then begin to have conversations with our customers on, we then begin text mining and putting AI against the text. From there, we're starting to pick out key themes, right? Think of your basic word cloud, but it's a little bit more complex than that. We're able to pick out key themes on why we're not winning that business.
What you're able to do then is go back and retool your sales and services strategy, and think differently about your credit offerings. Ultimately, the next time we have that conversation with that next customer with a similar profile, we can change it to meet their needs because we know what the market is telling us from a feedback perspective.
We've created a closed-loop cycle: from the analytic impetus of a model, showing us what our opportunity is; taking the learnings back full circle from our customers whom we speak to, about that particular model; changing behavior as we move through the second time around the circle, to really have a more detailed proactive discussion with our customers.
Lots of circles and 360 analogies here. Let me dive in, Tim, and try to summarize. It sounds like the first step in the customer 360 was better understanding the market share that you have with each of those customers. If you have a customer who's buying products and services from Ingram, what percentage of the total addressable market you have with that customer is with Ingram? Say that's 40%.
It sounds like part of the approach was to look at third-party data, obviously also to look at the behaviors and the interactions that you have with the customer on that 40% side. Using some third-party data, you figure out the additional 60%. Is it actually 60%? Do you have 40% market share, 50% etc.? Is that part of the story?
Very much. Third-party data in conjunction with internal data science.
Once you have that understanding of your market share, you did mention informing the folks on the sales side, better instructing them. Is it something along the lines of, "Hey, with this customer we have a large opportunity but very small market share; well below where we are with most of our customers in that region or area. This is low-hanging fruit. There should be no reason why we can't increase our market share." Is it kind of like that, or is that too simplified?
It's a little bit simplified. What we tried to do is then put lead scoring in place, right? So we lead-scored every customer based upon a vendor whose products we wanted to sell to that particular customer, or work with that customer on later. Each lead receives a score from 1 to 10, with a 1 being the highest-priority lead and a 10 being the lowest. We spent our time focused on the ones and twos.
Therefore, you could know with confidence that you are having your sales team call out against the biggest opportunities, which, therefore, yields the biggest benefit. It wasn't a haphazard approach. It was a very specific and targeted approach based upon analytic profiles of these specific customers’ data.
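The 1-to-10 scoring Tim describes could be sketched like this. Everything here is an invented stand-in: the signal names, weights, and decile thresholds are placeholders, not Ingram's actual model, which would calibrate such cutoffs from data.

```python
# Hypothetical sketch of lead scoring: each customer/vendor pair gets a
# 1-10 score (1 = highest priority), and sales focuses on the 1s and 2s.
# All feature names and thresholds are invented for illustration.

def score_lead(estimated_untapped_share, account_size, propensity):
    """Collapse a few analytic signals into a 1-10 lead score.

    estimated_untapped_share: fraction of spend (0-1) believed to go
        to other distributors.
    account_size: customer's estimated total addressable market ($).
    propensity: modelled likelihood (0-1) the customer buys the category.
    """
    # Blend the signals into one opportunity value (larger = better).
    opportunity = estimated_untapped_share * account_size * propensity
    # Bucket into 10 ranks; these dollar cutoffs are arbitrary
    # placeholders a real model would learn from historical wins.
    thresholds = [5_000_000, 2_000_000, 1_000_000, 500_000,
                  250_000, 100_000, 50_000, 20_000, 5_000]
    for rank, cutoff in enumerate(thresholds, start=1):
        if opportunity >= cutoff:
            return rank
    return 10

# A large account with lots of untapped share scores near the top,
# while a small, mostly saturated account falls toward the bottom.
print(score_lead(0.6, 20_000_000, 0.8))  # high-priority lead
print(score_lead(0.1, 200_000, 0.3))     # low-priority lead
```

The design point is that the ranking, however it is computed, gives the sales team a defensible ordering of where to spend their hours.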
Got it. So the text mining comes. You have the sales individual or account executive actually going out and selling. It's over email, I imagine, but a good portion of it is audio. Did you have any audio to text? Were you recording the conversations and then using that to mine? Is that also part of it?
Exactly. We at Ingram do have the ability, and we do record all of our phone calls. So we're able to take those key learnings—keywords, phrases, comments, feedback—and have that create a bit of a word cloud, a bit of an opportunity assessment of what we need to change in our sales pitch.
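A minimal stand-in for that "word cloud plus" idea is tallying loss themes across call transcripts. A real pipeline would combine speech-to-text with NLP models; the transcripts, theme list, and function here are invented for illustration.

```python
# Toy sketch of mining call transcripts for loss themes. The theme
# vocabulary and sample calls are made up; a production system would use
# speech-to-text plus topic modelling rather than keyword counting.
from collections import Counter
import re

LOSS_THEMES = {"price", "credit", "stock", "delivery", "terms"}

def theme_counts(transcripts):
    """Tally how often each loss theme appears across call transcripts."""
    counts = Counter()
    for text in transcripts:
        # Lowercase and split into alphabetic tokens before matching.
        for word in re.findall(r"[a-z]+", text.lower()):
            if word in LOSS_THEMES:
                counts[word] += 1
    return counts

calls = [
    "They went with a competitor on price and delivery time.",
    "Customer asked for better credit terms; price was fine.",
    "Out of stock on the SKU they needed, lost on delivery.",
]
print(theme_counts(calls).most_common(2))
```

Feeding the top themes back into the sales and services strategy is the closed-loop step Tim describes next.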
The miss could be in our sales motion or within our analytic approach. If any data scientist or team tells you that they are 100% accurate 100% of the time, they're not.
There will be times when we're off the mark. Maybe we made a calculation error, right? When you're trying to profile hundreds of thousands of customers across all levels of technology, you're going to have opportunities that are misses. It's really important to say that. Anybody who claims, without a bit of humility, that they're dead on is probably drinking their own Kool-Aid.
We know that we're not perfect, but what we have given is a directional indicator, which then changes the conversation with our customers. And what you find, most customers, most human beings, they just want to tell their story, and they want to talk. We've opened up a new avenue of conversation, which inherently then ties us closer to the customer.
Everything is around customer intimacy and experience. We've been talking at Ingram about those two topics for the last two years, investing significantly in those two areas. From an analytic perspective, we can start a new conversation, a new line of communication. We've now increased intimacy, and we've increased that experience, which inherently, even that in and of itself, will create a stickiness that is non-transferable to another distributor because we inherently won at the experience, just through those analytic profiles. Even if we weren't able to convert the sale, we still did a good deed for Ingram, and we're able to advance the business over the long term.
Just guessing here, Tim, but part of the change management challenge is you have folks in sales who've been selling a certain way for a long period of time. They then start at Ingram. Suddenly you have these analytics teams who are outlining the best way to have the conversation and who are the top leads in their region.
That person might be thinking that they know where their top customers are and that they know how to sell. Am I right on that? Is that the hardest part?
You're dead on. If you think of the history of Ingram, we're a highly successful organization.
Not just through COVID, but over my tenure with the organization, we've had record results each year. If it's not broken, don't fix it. There's always that mentality. It's going to exist in every business across the globe. What we're trying to do is elevate.
It's that straightforward. You're right. We have very talented associates across the globe and who do an outstanding job and who drive the business forward on a daily basis. We're simply trying to help them take that extra step, go the extra kilometer, yield that extra benefit back to the entity.
You're sort of augmenting and helping them. You're not telling them what to do, per se. You're trying to just make them better, right, through the use of data science and analytics.
I want to switch gears yet again and dive into our last topic because everything you described is great, but if you're not measuring it, then you don't know what's working and what's not working. You also don't know the value that the team is producing. Talk a little bit about how you and your team are helping to measure this process or a number of other processes and models that might be out in production.
I think measurement is key to everything we do, right? It's how you justify your budget every year. It's how you speak to your performance. When I go to my manager, I speak to the results we contributed over the year. That's the way our team looks at its level of success. You may write the world's best algorithm, but if it doesn't get implemented and you're not able to measure the results, then it was all for naught.
What we've done is put a specific emphasis on the measurement of activities. I'll qualify this in two different buckets. There are those items that we directly impact and can directly measure, meaning we make the decisions that are ultimately implemented in an immediate manner. Then there are those activities where we ask a sales or operations team to change behaviors and then drive results based upon that.
Let me talk about the first one: those areas where we create solutions that are implemented instantaneously or in near real time.
Let's think about procurement activities or stocking levels in a warehouse. We at Ingram have warehouses. It's part of our strategy. In the US, we have a multitude of warehouses. We actively play a role in stocking levels and products which should be stocked in warehouses around the globe. We make the recommendations, which are then done in conjunction with our procurement teams. Those are stocking levels which then are adhered to within a specific location.
We may set wage rates within certain locations, for example: how much an entry level employee should make; this is how much a mid-tier worker should make based upon their job grade; this is how much overtime we should be paying. After that, you receive diminishing returns. Doing those types of analyses, which are implemented in the near term, which have real-time OPEX or CAPEX implications, ultimately has a benefit. We then measure the benefit that is received based upon that.
What we're working on now, which I find really interesting going back to direct return and measurement, is around procurement and procurement levels: what inventory should we be buying, at what quantities, around the globe? We carry a significant amount of inventory, which ties up working capital and has an impact on your financial statements. In conjunction with our operations team, we are actively working on reducing that working capital footprint or, maybe not reducing in some cases, but more accurately spending those dollars on the right inventory. It is critically important to have the right on-hand inventory in our business, right? It's a lot of last-minute orders and areas that need to be just-in-time. So those activities, we then measure and report back to our local country leads. What was the impact of this particular activity around labor, around procurement, around logistics? Those are the one-offs which enable us to have that direct conversation back with the local country chief financial officer.
Then there's the other component which we talked a bit about, around the customer 360. It's not just customer 360 that we're talking to our sales teams about. It's a whole host of initiatives. We have vendors that come to us that say, "We need to help launch a new product," or, "we need to figure out how to push this subscription model service," which are the wave of the future for technology today.
As we migrate from traditional hardware to subscription-based services, we need to understand which customers and end users have that profile in order to absorb that new consumption model.
We'll go out, profile and help our resellers pair up with end users that we identify, who should be engaged in this particular activity. Now that's all well and good. Our activities from an analytic perspective are very sound, but then again, you're asking another individual to take the activity and run with it. The measurement becomes a little bit diluted.
That goes back to what we discussed in our last conversation, which, again, I come back to you because I think it is such a critical point. That's the idea of change management. It's changing an organization's behavior so that you ultimately can drive a specific result. When we see that the activity wasn't being adhered to in the manner in which we had hoped, we then have those conversations with our sales leads and executives in those individual countries to find how we can better collaborate together.
And then, again, measure the activity. What we're doing is we're reporting back on a monthly basis, not only on those activities that we have direct impact on, but on being able to trace a lead generated by a data science associate all the way to the activation point; the conversation with the customer; potentially, the sale or the quoting of that particular solution and then the delivery to the end user.
What's important, I didn't speak too much about this earlier, is that we do have this end-to-end visibility all the way from the end user, whether it be a hospital, school, manufacturing facility, all the way to the vendor. We have the full spectrum view, which allows us to really see the recommendations that we make come to life.
Then we're able to report those back through a series of dashboards. In some cases, it's me doing just quick status calls to those local countries to ensure that we're activating and generating the responses that we want to see. If not, take a moment to course-correct and see how we can change, whether it be behavior or whether it changes the algorithm to take on a new topic, technology or opportunity that may fit the local need.
We go back to a point that you made earlier around localization, and that really speaks to that point. Each country has a different initiative and need. Each country carries a different focus. Well, there are obviously some similarities, but there are unique one-offs around how their growth plan comes together in a certain fiscal year. That's why we really want to make sure that we're hyper-localized, delivering those analytic responses to the immediate needs of those specific countries.
It sounds like there are use cases where data science is directly impacting and making decisions. Those are probably the fairly easy ones to measure, right, where you can say this is before data science where the model wasn't actually a part of the decision-making process. Today, it is this and you can see that our margins are X percent better or what have you.
There are other cases where, similar to the customer 360 conversation, data science is augmenting and helping existing teams, who then go on to make decisions. They may use the lead and be successful, or they may not, but you're measuring that as well. Especially in the case where you're augmenting what a team is doing, part of that change management solution is to show the results, right?
“Now that we rolled out this new customer 360 initiative, your leads are now being scored in a different way and these are the results that we're seeing. Do you agree? Do you sort of see the same thing?"
If they do then they respond that it works great. Then I imagine that it can help win hearts and minds out in the field.
You really said it well. What we try to do, in full transparency, is build our case studies first. We have this unique ability to work across 20-30 different countries, but we pick a handful that we really try to test and learn in, Canada being a great example, or segments within the US business.
We try certain activities that generate an uptick in sales and we build out case studies. When we're going into a new country, we can frame it from an out-of-country perspective while looking at a localized business, the BUs you keep talking about: testing and learning in a safe, secure environment and then taking that learning and blowing it out in a much broader fashion. One of our most successful routes to market has been having a detailed account of how a certain activity was created and implemented in an individual location, and the benefit it delivered within that specific country.
The other piece, equally important to the measurement, is what we do before any activity: we predict what the results will be.
It's a unique approach. If we're going to run an activity in a specific country, we'll try to speak to the benefit that will be received through a basic forecast model. That allows us to say we'll be able to generate X amount of dollars, euros, pounds, pick your currency, if we activate this activity, which thereby justifies the labor.
Labor is always going to be king. You have a finite amount. A sales rep has a certain number of hours in a day; how much time do you want them to dedicate to this new project? Once we're able to quantify the opportunity size of a certain set of activities we want to implement, we can go back to that sales lead and present the opportunity: we believe it will generate this amount of incremental dollars, and you'll need X amount of sales labor against it. It becomes a very easy ROI calculation, grounded in the case study from our earlier discussion. We're getting much better with change management but, like anything in the organization, it's a bit of trial and error. The more facts you can present, oddly enough, the better it's received.
As you were telling that story, I was thinking to myself that maybe data science teams need a small marketing department of their own, just to help with those case studies, getting them out there and selling the vision and the results.
This has been fascinating. Tim, I really appreciate you being on the Data Science Leaders podcast. We talked a bit about data science and your team: how you have analytic arms in local markets helping to deliver the data science that the dream team in Toronto is building. We talked a bit about your approach to customer 360. Finally, we talked about measuring all of this great work.
I really enjoyed the conversation. I really appreciate you taking the time. If people want to reach out to you, can they go to your LinkedIn page and connect with you? Is there any other way to get a hold of you?
LinkedIn is the best. It's Timothy Suhling, not Tim, on LinkedIn. Feel free to reach out and I will always respond back. I make it a point to be fairly active online.
Awesome. Great. Well, thanks, Tim. I really appreciate you taking the time.
Thanks so much, Dave. Really enjoyed it and appreciate your time as well.
About the show
Data Science Leaders is a podcast for data science teams that are pushing the limits of what machine learning models can do at the world’s most impactful companies.
In each episode, host Dave Cole interviews a leader in data science. We’ll discuss how to build and enable data science teams, create scalable processes, collaborate cross-functionally, communicate with business stakeholders, and more.
Our conversations will be full of real stories, breakthrough strategies, and critical insights—all data points to build your own model for enterprise data science success.