
Help Me Help You: Forging Productive Partnerships with Business Stakeholders
Summary
Transcript
There’s tremendous value in pure data science research. In an enterprise context, however, it all comes down to how learnings and insights from that research can help advance business growth, customer experience, and product innovation.
Sunil Kumar Vuppala is the Director of the Global Artificial Intelligence Accelerator at Ericsson. His career journey from a researcher role to data science leadership has given him years of perspective on how ML professionals and their business-side counterparts can build partnerships that pay off in both the near and long term.
In this episode, Sunil shares some of those key lessons on education, communication, and collaboration. Plus, he details a unique MLOps strategy he’s employed to address challenges with scaling model monitoring.
We discuss:
- How a research background can inform leadership style
- MLOps best practices for scale
- Forming mutually beneficial partnerships between business stakeholders and data science teams
DAVE COLE
Hello, welcome to another episode of the Data Science Leaders podcast. I’m your host Dave Cole, and today's guest is Dr. Sunil Kumar Vuppala. Sunil, how are you doing today?
DR. SUNIL KUMAR VUPPALA
I'm doing great, Dave, thanks for the opportunity.
DAVE COLE
Thanks for being here.
DR. SUNIL KUMAR VUPPALA
I’m excited to be part of this.
DAVE COLE
For our audience, Sunil is the Director of Data Science at Ericsson. He's based out of Bangalore. Today we'll talk about Sunil's background. He started as a researcher within the data science world and has now become a data science leader. I'm very curious to hear how that has shaped his leadership style, and how his researcher's background might have influenced the way he manages his team.
Next, we'll discuss MLOps best practices. Sunil has some opinions there. Lastly, we'll cover a topic that's featured a lot on this podcast, but that I don't think we can get enough of: dealing with expectations from our business counterparts and educating them on how business problems convert into data science problems, model status, accuracy, and so on.
Why don't we start, Sunil? I'd love to hear about it from the beginning, with your career at Infosys. You spent a number of years there, about a decade, if I recall. While you were working there, you were also working toward your PhD. To add to your accolades, you also have 40 patents to your name. Talk to us about what you were researching while at Infosys. How did you get your PhD?
DR. SUNIL KUMAR VUPPALA
Definitely. An interesting question to start with, Dave, thanks for that. Looking back, my journey with Infosys takes me back maybe 15 years. I started there as a junior researcher. We started with wireless sensor networks; to be very precise, I was the second member to join the team. Within seven to eight years, we ramped up to a 70-member team, which we used to call the convergence lab. Now the term isn't just wireless sensor networks; it's the Internet of Things, IoT. We see AI as a buzzword now, and 10-15 years ago IoT was at a similar point in the hype cycle. I have seen that complete life cycle, right from conception to how it is now.
I started with networks and then the Internet of Things during that time. My PhD journey has been complementary to that. If your topic of research is completely tangential to what you work on for close to six or seven years, it's not easy; a PhD journey is demanding. Sometimes we feel as if we are solving the greatest problems in the world but, at the same time, you need to balance your personal life, your professional life and the professor's expectations. It's a balance at the end. We need a lot of support from family and professors and the organization as well. In my case, what worked out as a win-win situation for me was that we were working on similar problems, not exactly the same. If it was exactly the same, there would have been IP issues, like who claims the IP and all that.
It is a complementary thing. At the PhD level we were solving problems using the Internet of Things, talking about smart energy management and how it would apply to a million-variable problem, theoretically. At the same time, how does learning the modeling and optimization aspects help a campus like Infosys's? We used to have a beautiful 50-acre campus. The 50 different buildings created something of a miniature city, where we were able to apply that learning. One particular chapter in my thesis is actually a real implementation on campus of what I had proposed in the thesis. I had management support as well. That's why I'm talking about an application-oriented situation, even though it was a journey of seven to eight years to get my PhD, because we had support.
On top of that, the institute is just across from my campus and office, so I could get there in good time, and professors could walk onto the campus any time. That's unlike other situations where the professor may be in the same city but in a different direction, so it's more challenging to work together; just talking to the professor is not easy. Complementary to this, my professor used to be available 'til midnight, after my work was over. I used to have interesting discussions with him during the night, around midnight. Maybe I stayed more than 100 nights on campus. That's how it finally worked out in a complementary way: in the end I published good work and was granted six US patents for it. After that the number grew to the 40 different patents you were talking about. Not everything came from the PhD process, but rather from my journey starting at Infosys and then Philips. Over the years the patent count grew to more than 40 in my name.
The starting point, the initial seed, was the Internet of Things, then smart energy management. I slowly moved toward automation and AI within Infosys itself, where I worked on platform-building. I was possibly a key architect at that time for the information automation platform, which Infosys launched for different customers, and it gave us good experience. I slowly transitioned from a core researcher to understanding the business value. After that we continued to work on cutting-edge technologies but, at the same time, we had to balance that against figuring out what portion of it really solved business problems. We must remind ourselves of this all the time, as good researchers. I have great respect for researchers who convert their work into real business problem-solving value. Ultimately everything is measured in dollars and patents: how much monetization we can get at the end. Right?
DAVE COLE
It’s fascinating that Infosys has created this campus. I imagine that the university where you got your PhD was just across the way. My guess is that it’s by design and not by accident. Is that right?
DR. SUNIL KUMAR VUPPALA
Very much so, because I could have chosen to do the PhD anywhere in the world. To be frank, I did my Master's dissertation in Australia, and that kind of remote arrangement while working in India takes a lot of time and effort. I decided to choose a campus close by, for a change. My professor is from MIT; he spent 20 years in the US. So, while I may not have been able to do the program directly in the USA, the rich experience he gained there could be imparted to our PhD program here as well.
DAVE COLE
That's very smart. Researchers sometimes sacrifice getting a higher degree for exactly that reason. If you can do it while working, with the support of your company, obviously that's a win-win. What I'm interested in, though, is that it sounded like you were diving into IoT and then into the smart energy problem. What sort of direction were you heading in? What stands out in my mind is that this is a large international consulting firm. How were they guiding your choices? You mentioned the dollars and cents. Is that how they guided the type of research you were doing, to ensure it would result in applicable business solutions that were also beneficial to them?
DR. SUNIL KUMAR VUPPALA
Interesting question. Even though the core business is services or consulting, we used to have a kind of lab environment, starting with just 400-500 people, which wasn't considered too many. The benefit was that we were able to work with the founders, right? The extended benefit there was understanding the business: the needs at a higher management level and the approach for delivering on them. The second thing was meeting the CXOs of our customers, not just our own founders. We interacted weekly with CXOs from Fortune 500 companies, whoever visited headquarters. During those visits, they would meet the business people, their own strategy partners and counterparts. We often got our slot to show innovation because the company believes that innovation is the driving factor, right?
I might have met more than 100 CXOs during my journey there. I would talk to them about automation and AI. The transition was very smooth for me, to answer what you were asking previously, because we were able to build the platforms and generate thought leadership in terms of publishing papers and patents, and adding value to the organization. CXOs ask where their ROI is. Yes, they'll buy into smart energy management if they can save a certain amount of energy, but the next question was always about ROI.
“If I invest 1 million now, what can I get within 18-24 months?” That question actually struck me. It's not only about the collection of data. The Internet of Things is mostly about sensing and actuation, and to do the actuation part, we need to do a lot of analysis. That's where AI complements the work. We shifted the focus a little bit. Yes, we did good work in IoT on the data collection and sensing side, and not just in energy management; we did health monitoring and campus monitoring, and there are a lot of IoT applications. But we started focusing on machine learning, statistics, anomaly detection and time series, depending on the real problem statement, and on the value of that analysis for better decision-making by the CXO. That 18-month ROI may be very hard, but we are able to convince them: it's not only about getting the return within 1.5-2 years, but also the strategic advantage if you go there.
We are able to convince them by showing them the value. We're able to partner with them because, as data science leaders, we don't only see fun in building from scratch. If something is available on the market, be smart and hook onto it; that saves a lot of time and effort as well. It has been a good experience as a researcher, partnering and working through the data of more than 60-70 use cases across healthcare, banking, manufacturing and other verticals.
DAVE COLE
I think there's a fallacy or assumption that a lot of folks make about companies like Infosys, service companies, that is, that they're merely there to augment your innovation or ideas. A lot of them have gotten to the point where they're building their own innovation labs, similar to what you experienced. The solutions coming out of those labs can get you 60% of the way there already and can be applied uniquely to your own world. Creating and fostering that sort of innovation is really important for attracting talent, right, and keeping people like you, who want to go on and get a PhD. How did that experience shape you as a data science leader today? How is your team organized? I'm very curious to see that from your perspective.
DR. SUNIL KUMAR VUPPALA
Definitely. So the role I'm playing as a data science leader at Ericsson is not pure research. We have Ericsson Research separately; I work as a director in the Global AI Accelerator. As the name indicates, it's a global team. We want to influence the application of AI: how do you accelerate the penetration of AI across the organization and to the customers? My team has been 15-30 people over the last three years; it's not consistent. Even if I take an average of 20 people with me, at least four or five will be holding or pursuing PhDs. It's actually a mix of data scientists, data engineers, and MLEs. Machine learning engineer is an up-and-coming title for the full-stack kind of data scientist. The fourth role we have is called a technical product manager.
What we mean with that fourth title is that the domain experience needs to be tied in. It's not possible for me to demand 20 data scientists or data engineers with rich experience in telecoms or, for that matter, any domain I work with, like healthcare. As we speak, in 2022, we can't get 'complete package' data scientists with domain experience who are also well-versed in concepts and programming. Somewhere we need to compromise; maybe five years down the line you can get enough people from your domain to fill in, but given today's skill set demand we need to augment from different directions. We have one set of people with rich domain experience and the drive and curiosity to learn AI. Other people come in from different organizations, some with good research backgrounds. The common thing is they're good data scientists or data engineers with strong programming and concepts.
This is how we are able to build the team. Overall we have more than 300 data scientists and engineers across the globe. You also asked how my experience as a researcher has helped me in shaping my team and getting them excited for their work. As a data science leader, I see it in the ways I follow up on their career opportunities. It's not just about getting the team members together; you need to excite them with challenging work and motivate them toward their career aspirations, right? You have to keep the approach in mind: making sure we are really solving the problem, and converting the business problem into a data science problem. We are able to give them a helping hand in the beginning.
The senior members of the team are able to take some of the load from us; that bandwidth helps me take up new tasks as well, right? So it's always about the team and a focus on what we are building. We can't micromanage each and every person. There are 20 people working with me, but at the same time there are different layers, like seniors. It's a flat organization, so in that way everybody may be reporting to me. Beyond these 20 people I have interns, getting onboarded into the organization and learning the core skills. We are creating a safety net for these data scientists, so that whenever help is needed, we are there. Priority order also counts: if I'm handling, say, 10 deliveries, it doesn't mean that all of them are equally important.
We are able to prioritize in terms of business impact. Prioritization is just one part. The second thing is that you may need to deep-dive to the math level, and the research background really helps with that. Thought leadership is also important, because we don't want to only do projects. We don't think that because we have done 50 projects this year, next year's target is 'x' plus 10. It doesn't work that way. How many reusable components are we able to create along the way, so that what we are solving in 2022 is optimized and automated and we don't have to solve the same problems two years down the line? We want to leap past those solutions by building frameworks, platforms and reusable components: common assets is the term we use internally. Can we give those to the business so that they can solve those problems, while we take up the complex ones?
Throughout all these aspects, the research experience really helps me be a role model for my team while creating the setting for them. Yes, if they need a deep dive, we have that kind of flexibility. When they're ready to make decisions, there's always data. Data science is a trade-off, right? When we are building a model, it's always a case of 'if I have one more month, maybe I can build a better one,' but does that mean it will work for anything for its lifetime? No. We work within the constraints of time and data challenges, including a lot of privacy challenges. Sometimes we tell the business people we need at least one to two months of data to start with.
But if I wait for that, I lose precious time, so I need to start somewhere and then motivate the people, saying that they need to start. Even with a small set of data, let the data keep coming, but look at what else we can do within those parameters to prepare ourselves. These are the real-life challenges we face when the industry doesn't have direct answers; we are learning as the industry moves forward. So the experience I have, maybe the last 10 years in this particular field and maybe 15 years in research, is really helping me address this problem in a more systematic way. I work in alignment with the data science life cycle rather than hooking in as a typical software engineer; that's where the difference is. Those are the points which I see.
DAVE COLE
I'm hearing you say that your background is as a researcher and that you have members of your team with similar backgrounds; it makes you think a little more for the long term and systematically, right? The models that you're building today, you don't want to be rebuilding from scratch two years down the line. What have you done to ensure that doesn't happen? Is there tooling? Are there roles? Is there a process?
DR. SUNIL KUMAR VUPPALA
My observations also stem from across the industry, not just within Ericsson. I have worked in both service companies and product organizations. I have built at least four different platforms and worked on 100 different use cases of AI, with maybe 10-20% of them in deployment. The aim is that something is really working.
It's not like the end goal is only the deployment, right? To answer your point specifically, the world is moving towards platforms and solutions, whether it's a product or platform organization. You can see Google, certainly Microsoft, and others around the world producing some kind of platform or framework; these are the buzzwords people use now, especially in AI and data science. There are different platforms and frameworks available. In our journey, we are not building one big monolithic platform that only pays off two or three years down the line. We take the bottom-up approach: solving interesting business problems and extracting the commonalities. There are platform teams helping to build the development and deployment platform, and we complement that, whether in terms of transfer learning of the models or some kind of data quality check. We are talking about a huge amount of data. Can we standardize calculations of KPIs (Key Performance Indicators)?
Whether it's a classification model, some kind of NLP (natural language processing) module or computer vision, we have projects across the spectrum. Can we extract the commonalities and build reusable components and frameworks? Next time, whether it's our team or our business partners working on it, they won't start from scratch, right? That's the philosophy we work on, and it's why we've been able to build a lot of these things out.
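To make the "common assets" idea concrete, here is a minimal sketch of what one reusable data-quality component might look like. Everything here, the names, the pandas-based design, the 20% missing-value threshold, is an illustrative assumption, not one of the actual internal assets described in the episode.

```python
# Illustrative sketch of a reusable data-quality "common asset".
# All names and thresholds are hypothetical.
from dataclasses import dataclass

import pandas as pd


@dataclass
class QualityReport:
    """Summary of basic data-quality KPIs for one dataset."""
    n_rows: int
    null_fraction: dict      # column -> fraction of missing values
    constant_columns: list   # columns carrying no information
    passed: bool


def check_quality(df: pd.DataFrame, max_null_fraction: float = 0.2) -> QualityReport:
    """Compute simple KPIs any project team can reuse before modeling."""
    null_fraction = df.isna().mean().to_dict()
    constant_columns = [c for c in df.columns if df[c].nunique(dropna=True) <= 1]
    passed = all(f <= max_null_fraction for f in null_fraction.values())
    return QualityReport(len(df), null_fraction, constant_columns, passed)
```

The point of a component like this is exactly what Sunil describes: each project team runs the same check instead of rewriting it from scratch.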
DAVE COLE
Cool. That makes a lot of sense. Let's move on and talk a little bit about MLOps. Now that your team has built these models, using the reusable components you've built internally, and you're ready to put them into production, what best practices do you have? What is your philosophy when it comes to putting a model into production and then monitoring it once it's there?
DR. SUNIL KUMAR VUPPALA
That's the real need of the hour. I still recall, in 2019, just before this COVID scenario, one of my keynote addresses was about whether we are ready for AI DevOps; that was the title. At that time, between AIOps and MLOps, there was no standard definition. Slowly in 2020, 2021, people started using MLOps as a standard, and now everybody understands it, right? In the last few years, the major development is that people are understanding drift: concept drift, model drift. I'll go a little deeper into that, but to quickly explain, we had a real challenge. Just before COVID spread across the world, we worked on a project where we were able to build models to the accepted accuracy, without going into the numbers. When we were ready to deploy, the patterns completely changed because of COVID. People were not traveling in the same locations or directions, and the office environment had changed.
People had started working from home. Consider the 4G network; I'm not talking about the 4G implementation at that time, but the 4G network we had wasn't delivering the designed results during the test phase, or when we deployed it for the first time. The same data from the same location, one of the busiest locations in India, had been working fine as per the numbers, right? What really happened is that the data distribution changed into a scenario that was never present in the historical data. We need to be ready for that in a real situation. That's a classic example for MLOps, where we should be ready with the what, when and how of retraining. The buzzwords we use are model drift, concept drift.
If you observe closely, model drift means the relation between the target and the independent variables changes over time. Concept drift refers to the statistical properties of the target variable itself changing, right? Data drift means the distribution of the data used to make the prediction is changing. They are each slightly different. We need to understand whether the data distribution is changing, or whether the distribution is the same but its mapping to the target variable is changing. Once you understand the nitty-gritty of it, we can identify variations: sudden drift, gradual drift, seasonal drift; these are based on the type of data, right? So once you identify the type of drift, we suggest whether online learning is a better approach, or some kind of feature dropping because the changes are happening in those features, or model weighting. With these kinds of proposals, we also help the business people set their expectations, right?
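As an illustration of the data drift case Sunil describes, here is a minimal sketch that compares a live feature window against the training-time distribution with a two-sample Kolmogorov-Smirnov test. The library choice (scipy) and the 0.05 significance threshold are assumptions for the example, not the customized platform mentioned in the episode.

```python
# Minimal sketch of data-drift detection on a single feature.
import numpy as np
from scipy.stats import ks_2samp


def detect_data_drift(reference: np.ndarray, live: np.ndarray,
                      alpha: float = 0.05) -> bool:
    """Return True if the live distribution has drifted away from
    the reference (training-time) distribution."""
    _statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha


# Example: a post-COVID-style shift in one network KPI.
rng = np.random.default_rng(seed=0)
training_window = rng.normal(0.0, 1.0, size=5_000)  # seen at training time
live_window = rng.normal(0.8, 1.0, size=5_000)      # shifted behaviour
print(detect_data_drift(training_window, live_window))  # True: drift detected
```

A check like this can run on incoming data before the model executes, which is the "detect the drift long before the actual model execution" idea below.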
It's not that, given another month, I could build a model that works forever. Maybe I can improve the accuracy somewhat, but I can't guarantee it'll work forever; nobody in the world can do that. But at the same time, you can prepare ahead and free up a lot of MLOps time. I'm not advocating for a particular platform here, but there are open source platforms, and we have some customized versions of them as well, which we give to our business partners and other customers. They can detect the drift long before the actual model execution happens, so that we are not wasting that effort either. If the distribution has deviated so far from the original one, what is the point of running that particular model and checking whether the result is valid, right?
So that kind of framework, we are able to give to them. Some of it is still in review mode; even a year into production, there's a lot of learning left. We're still not able to pinpoint the right method every time. Yes, we identified the drift, or the particular type of drift, but there's no scientific rulebook that says: if you have this issue, do this. If it were codified at that level, it would be easy to build some kind of auto-MLOps. There's a lot of context involved, so we may not be able to build full auto-MLOps yet, but the world is going towards that. How much of this process can we automate, so that the system can automatically detect a drift and trigger a retraining? And while doing the retraining, can I use only the last few hours of data? A few days of data? Or only some specific classes of data?
There is a lot of context we need to consider. Slowly, down the line, I'm sure there will be a lot of off-the-shelf solutions. The hyperscalers can also come up with mechanisms so that it'll be easy for customers, whether in telecom or beyond, to apply these kinds of techniques and improve model management, right? Once the model is built, maintaining it requires a lot of effort, but it doesn't always require a data scientist; maybe some kind of AutoML process. If it is really deviating, to the point where we need to change the feature engineering, then a data scientist is definitely required. Other things can be automated into that cycle between data science and auto-retraining; online retraining and that kind of thing can help in that process.
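To illustrate the retraining question Sunil raises (how much history to retrain on once drift is detected), here is a hypothetical sketch. The drift-type labels and window sizes are illustrative choices, not a prescribed method; as he says, there is no scientific rulebook for this yet.

```python
# Hypothetical sketch of choosing a retraining window by drift type.
# The "timestamp" column, labels and windows are illustrative assumptions.
import pandas as pd


def select_retraining_window(history: pd.DataFrame, drift_type: str) -> pd.DataFrame:
    """Pick the slice of history to retrain on, based on the kind of drift."""
    latest = history["timestamp"].max()
    if drift_type == "sudden":
        # Old patterns no longer apply: keep only the last few days.
        return history[history["timestamp"] >= latest - pd.Timedelta(days=3)]
    if drift_type == "gradual":
        # Blend old and new behaviour with a longer window.
        return history[history["timestamp"] >= latest - pd.Timedelta(days=30)]
    if drift_type == "seasonal":
        # Keep a full cycle so the seasonality is represented.
        return history[history["timestamp"] >= latest - pd.Timedelta(days=365)]
    # Unknown drift: fall back to all data and involve a data scientist.
    return history
```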
DAVE COLE
There's a lot to unpack here. It sounds like you've been dealing with this challenge for a while. You are sort of empowering the users of the model, on the business side, to actually be a partner when they see the drift. Maybe they, themselves, determine that the model needs to simply be retrained or that the drift is so bad that there's an additional data element that might improve the accuracy of the model. In that case, they would go back to you and your team and say, "Hey, I think you might want to add this new data element in and sort of build the model from scratch."
That's very interesting, because I think most data science teams take ownership of the monitoring of the models, as well as determination as to whether or not to retrain. I think you’re empowering your business users who have the context, right. They know when there's a new release or product that has gone out that has maybe a new data element that they're collecting. It's very interesting and novel. Did you have any challenges in working with your business counterparts to educate them in this area?
DR. SUNIL KUMAR VUPPALA
Definitely. Whenever my team develops a model, the ownership stays with us; we are not escaping from that. At the same time, how do you really scale it up? I may have 20 people, or our team may have 300 people, but we are not going to scale just by growing from 300 to 3,000 people down the line. Instead we are trying to get maybe 10% of the business workforce to use data science in the next three to four years. Scaling means it becomes very democratized across the organization. We are seeing AI democratization. They may not be seasoned data scientists or expert researchers, but if they're able to use a model, at least understand its value and report on incoming issues, then a junior data scientist is good enough to maintain those models.
That will be the most scalable approach. At the same time, talking about the challenges, there will always be initial hiccups, right? For small issues, they'll sometimes keep pinging us even if there is good documentation and coding standards in place; those should be the default references. If we don't do that, it'll backfire on us because we haven't delivered a solid knowledge transfer, and it'll be difficult to maintain and scale further. Beyond ensuring that coding standards are followed and there is enough documentation, there are video recordings on how to fix things. At the same time, we need to be available for them when there is a serious need. Initially there will be a lot of pushback in terms of small issues: maybe the model is not working, okay, the model is not working.
Or the model is not giving the desired result; these are the common hiccups we have to face. Slowly, as we understand the root cause, usually it's that some kind of retraining is required. Other times, the data is not coming in the expected format and it's failing at that level. On the business side, they won't understand whether it's failing at the data level, at the model level, or, once everything else is done, even at the visualization level. But we need to take them to a different level; it is very important for them to really work the model and maintain it.
It is also not easy to convince them. "Yes, you worked for six months on the model. We provided everything to you, but you're still saying you can't build a robust model that works in all scenarios. Okay, I gave you six months; I'll give you one more month. Can you give me a model which works in all scenarios?" The world is still not prepared for that. Slowly they are coming to understand the reality of it as we progress. That requires some organizational push as well.
We do some training, and there are a lot of options available for people to at least understand the basics. We don't require everybody to code or go into the math but, as a business leader, awareness of the language of data science is very important. It's the same for their customers as well. Otherwise, we end up in situations where they've over-committed, and it becomes a big headache for us to meet their timelines and deadlines, right? So those challenges are always there in the business.
DAVE COLE
We segue nicely into our last topic here. What advice would you give other data science leaders? What do you think is important for your business counterparts to understand about the data science life cycle, and building models, that has helped you manage expectations? What specifically does that training involve, to get a business user up to speed?
DR. SUNIL KUMAR VUPPALA
It's a virtuous cycle, as we call it. If you have good data, you can build a good product; a good product attracts good users; and good users give you good data. That's known as the virtuous AI cycle. It's similar within an organization. The business is trying to understand whether it can solve this problem and convert the business problem into a data science problem. They need to understand the very basics of what data science can do and what it cannot do. It's not a hammer looking for a nail each time, right? But if a competitor or somebody else has done it, it can be a good use case for me.
They need to be realistic. What are the pain points for their business, at the organization level or their unit level? Then come down to whether AI or data science is a good fit for that kind of problem. That requires a business mindset along with an understanding of data science fundamentals. There are a lot of courses and training available internally and externally. That's the first thing we recommend a data science leader be part of, so that the business partners and other business leaders speak the same language as our data science leaders. That's the first part. The second thing is the team, right? Data science leadership may sit within a business unit, or be organized vertically or horizontally, with different challenges in terms of ownership, how you operate and where the responsibility stops.
At a vertical level, maybe you are responsible right from conceptualization through development and even to deployment. If it is horizontal, maybe you handle development and initial deployment, but pass the baton to the business partners as well. That's the second way to look at it. The third one is attracting the right talent and then retaining it. We don't require a stellar researcher for each and every problem. A good mix of people who understand the real business problems and the domain is really helpful for the organization or unit, and especially for data science leaders. It's a trade-off: what value are we bringing to the business by spending on the infrastructure or on the people, right?
What matters is that initially you may get a lot of support from the organization, but two years down the line, the dollar value is what we are responsible for, right? That means leaning into the real business problems: showing the impact, and exciting the team about a deployment six months after it happened; this is the business impact we were able to create. That feedback loop is also important, so that we are picking the right business problems, not just by our intuition, but by letting the real data speak for it. At the same time we must ask whether we're building fair, robust and trustworthy models. That is the responsibility of the data science team altogether. Those are the challenges right now. Maybe technically we have a lot of algorithms available, but when it comes to the data, I may not be able to access the data.
It may take a couple of months to really get it passed through all the privacy rules of different countries and organizations. We need to deal with those challenges effectively and come up with better approaches, right? They may have a private cloud environment, or we may use federated learning approaches. Necessity is the mother of invention, right? Standard methods may not work; we need to come up with a different, distributed kind of ML. We never talked about this a few years ago. Now we are talking about federated learning, distributed learning, real-time touchpoints and the need to integrate with the cloud. What will be the powerful combination of cloud and edge? These are the challenges now in front of data science leaders.
DAVE COLE
Great. Well, Sunil, I learned a lot today. I think your background is fascinating, including how you went from a researcher to a data science leader. You're working with your business counterparts today to get those models into production. It sounds like there's a mix of educating them as well as actually making sure that your team is delivering high-value models. Thanks so much for joining the Data Science Leaders podcast. If people want to reach out to you, can they hit you up on LinkedIn?
DR. SUNIL KUMAR VUPPALA
Definitely. I'm very active on LinkedIn. With due credit to the IEEE organization, I believe in AI democratization; that's where my weekends go. I enjoy delivering lectures and participating in these kinds of podcasts. That is the way to educate; sharing our knowledge is a lot of fun, right? Thanks for the opportunity.
DAVE COLE
You bet. Thank you very much.
About the show
Data Science Leaders is a podcast for data science teams that are pushing the limits of what machine learning models can do at the world’s most impactful companies.
In each episode, host Dave Cole interviews a leader in data science. We'll discuss how to build and enable data science teams, create scalable processes, collaborate cross-functionally, communicate with business stakeholders, and more.
Our conversations will be full of real stories, breakthrough strategies, and critical insights—all data points to build your own model for enterprise data science success.