
Scaling Data Science Value with Cross-Functional Teams
Summary
To embed models into SaaS platforms at scale, it pays to have a cross-functional team—software engineers, UX designers, data scientists, machine learning engineers—all working together.
That collaboration allows you to tackle hard challenges around scaling models to work across hundreds of thousands of customers. And it enables you to build something that offers tremendous value across many different use cases.
Jayesh Govindarajan, SVP Data Science & Engineering at Salesforce, joins the show to share how his team makes this a reality. Plus, he talks about the priceless value of customer feedback and the three areas where data science teams should focus their efforts.
We discuss:
- Arriving at data science from a pure engineering background
- Why telemetry is no substitute for customer feedback
- Tips for embedding models into a SaaS product
- The three pillars of work for a data science team
Transcript
DAVE COLE
Hello, welcome to another episode of the Data Science Leaders podcast. I am your host, Dave Cole. Our guest this week is Jayesh Govindarajan. He is the SVP of Data Science and Engineering at Salesforce. Welcome to the Data Science Leaders podcast, Jayesh.
JAYESH GOVINDARAJAN
Thank you, Dave. Good to be here.
DAVE COLE
Our agenda today with Jayesh here is to talk a little bit about his background. I think it's unique. Jayesh, you came from a pure software engineering background and are now a data science leader. I think you're the first Data Science Leaders podcast guest we've had who has a pure engineering background. I want to talk a little bit about how that helped you. I'm curious to see the makeup of your team: are people in your mold or are they more like pure data scientists?
And then, of course, we're going to talk about the Einstein platform.
So, Jayesh, looking at your background, you started working at Avaya. You went on to create a startup called MinHash, which was acquired by Salesforce about five years ago. At that point, you moved into your current role, working at Salesforce and helping to build out the Einstein platform.
Before we talk about our agenda topics...help the audience who might not be familiar with the Einstein platform understand what it is. Although, hopefully, they are...
JAYESH GOVINDARAJAN
Great question. Like you said, Dave, I lead several of Einstein's efforts here at Salesforce. Einstein has two parts or components.
One is Einstein for our Cloud stack: machine learning and AI components integrated and tied deeply to use cases within sales, service, and B2B marketing. Those are our three big Clouds. There's more, as well.
In addition, I also lead teams that build foundational components that assist humans in completing enterprise tasks, as well as fully automated self-service components such as Quartz. Think of it like a layered cake. There's a foundational layer which cuts across many of these components, and then there are verticalized AI and ML components specific to sales professionals, service agents, and marketers.
DAVE COLE
When you talk about the foundational aspects of your team, are you talking about work that is helping Salesforce, the company? Or are you talking about things that are helping out Salesforce customers?
JAYESH GOVINDARAJAN
Everything my team builds is, in essence, a product that goes out to Salesforce's customers.
DAVE COLE
Got it.
JAYESH GOVINDARAJAN
When they're performing CRM activities, our products, our AI and ML components, essentially help them complete or automate the task, or assist them with getting the job done.
DAVE COLE
I definitely want to dig into that, learn about how you go about doing it. How do you figure out what tasks to automate? How do you go about embedding into the Salesforce platform? Obviously, the Salesforce platform is massive, just in terms of number of users and active users. It boggles the mind. It must be a particular challenge to take some of these models and actually embed them in some way, shape, or form into a SaaS solution or platform of that size. I would love to pick your brain here, to help our audience learn a little bit more about how you do that at scale.
Before we dive into that, let's talk about your background. You started as a software engineer. It wasn't until your MinHash startup, it seems to me from looking at LinkedIn and our conversations, that you really got the data science bug and moved into that area. Is that fair to say, or did it start before then?
JAYESH GOVINDARAJAN
I think it started a little bit before that, but with breaks in between, focused more on engineering. The right way to characterize it is as a long meandering path into data science.
I studied computer science many, many years ago. My first exposure to building programs that learn from data was actually my Master's thesis: components of it involved working on geo features for search ranking algorithms. This was, of course, before Google was a thing.
Right after that, I went into a startup. As you well know, you wear all hats in a startup. I was a backend engineer there. I lost track of building components that learn from data for a few years while I was at the startup. I picked it back up when I joined Oracle several years later. I joined a team that was building clinical decision support systems, which I found quite fascinating. It's still an unsolved problem, really.
That's where I got into building models for predicting risk in provider settings and doing all kinds of number crunching for payers to optimize billing. So I covered the entire gamut. Before it was called machine learning, it was statistics, right?
DAVE COLE
Yeah.
JAYESH GOVINDARAJAN
But even there, a lot of my experience eventually came in handy for building scalable, end-to-end, data-driven systems: wrapping analytics and statistics into a product and shipping it. That was, I'd say, my first real introduction to wrapping analytics and statistics, at that scale, into a product. I wouldn't call it machine learning yet.
After that, I did ad tech for a startup here in the valley, called Skywriter, for about two years. Ad systems, as you well know, feature complex real-time processing with a matching component built in for serving the right ad. I think that was a strong learning ground for what came next, which was machine learning at scale.
DAVE COLE
During those last two, at Skywriter and Oracle, your primary job was engineering. There were certainly some data and analytics components to it, but you were managing teams that were writing and committing code through a repo, and so on and so forth.
JAYESH GOVINDARAJAN
Absolutely.
DAVE COLE
The data might have been somewhat peripheral. You were primarily an engineer, right? You had data; there was a QA team and everything else?
JAYESH GOVINDARAJAN
Yes. Building systems, getting things in production, running them at scale.
DAVE COLE
Right.
JAYESH GOVINDARAJAN
That was 70% of the job. The model itself was just one small piece of the whole creation.
DAVE COLE
Fast forward and look at your team today, at Salesforce. Because so much of your work revolves around data science products that are embedded into this SaaS platform, I'd have to guess that it's a mixture. Your team will have a mixture of data scientists and engineers, much more so than, say, an average Fortune 500 company's data science team. Is that fair to say?
JAYESH GOVINDARAJAN
Yeah, absolutely. I think it's a very cross-functional team. We have full-stack software engineers; UX folks who specialize in the front end; data scientists working on model and algorithm building; and machine learning engineers who continually evaluate models and are really focused on getting those models into production. It's truly a cross-functional team that cuts across all areas. I feel like that's super important for doing things at scale, especially when building AI and ML for the enterprise.
DAVE COLE
When you say 'cross-functional team', what do you mean by that? Is it important to have not just data scientists, but also folks who really know how to code and embed models into a SaaS-type solution?
JAYESH GOVINDARAJAN
Yeah. Let's take the typical lifecycle of a machine learning component in production. You typically start working with a handful of customers, going deep into the problem. You don't necessarily want to build the best model right there. You just want to figure out what the bounds of the problems are. What's acceptable performance in a customer’s context?
To do that, we have people who can prototype and build the model pretty quickly. Some engineers and UX people build a prototype with customers. Once we get to a point where we feel like it's ready to build at scale, we start. We create an entire team around it, get it into production, start to ask harder questions around the model generalizing across customers, for that particular task.
It's a pretty expensive process. Building something that goes out to hundreds of thousands of customers is an expensive process. You want to fail fast in the early phases so that, when you're ready to go, you have a full team that can build this out at scale.
DAVE COLE
Now we're getting into the process of what your team does, which is great. We're segueing into the Einstein conversation. It sounds like your process starts with your team having an idea for a new feature, or a data science type of product, to build and brand as some Einstein component. They start with a small handful of customers and a POC-type approach. Are these customers aware of it, or is there something that they see that just asks them to opt in? Is it done in partnership with them because you know that they're interested in this particular feature?
JAYESH GOVINDARAJAN
It is very much in partnership with the customers. We're very clear early on that we want to ensure that it’s a problem worth solving. This means that it’s going to either help users of this product get more efficient or, if they're not getting more efficient, help save money or grow their business in a different dimension. The value proposition needs to be pretty clear.
DAVE COLE
Right.
JAYESH GOVINDARAJAN
As you can imagine, if the value proposition isn’t clear, it's a self-selecting audience. If you keep trying something and it's not working, it's likely not a great idea that you want to pursue.
DAVE COLE
Right. Another thing you said, too, was that you're starting with a small cohort of customers. How many is that? Is that less than 10?
JAYESH GOVINDARAJAN
Less than 10, typically, yes.
DAVE COLE
You're starting with less than 10 customers, but then you have to think about whether it’s going to be useful to all 100,000+ customers that Salesforce has?
JAYESH GOVINDARAJAN
Yes.
DAVE COLE
And, to your point about your solution being generalizable and extensible to all of those 100,000 customers, or not—how do you go about making that determination?
JAYESH GOVINDARAJAN
I think that's a great question. In my experience, what works really well is understanding the bounds of what ‘good’ looks like.
If you build a model that’s trying to solve a specific problem, you're always going to have customers for whom it may not work perfectly well. Right?
One of the things that we've done quite well is design products where those failure modes are handled well. Models don't spit out information that's going to be completely useless over and over again. You bring a human into the loop. You have them review the output when you're not sure, and then you use it for the job to be done.
This is what I mean when I say that a cross-functional team is key to this. What I just described to you, this ability to pull in a human and get their input before you tweak a model, maybe unbeknownst to them, is completely a UX challenge. Right? That's what I mean when I say that the best and most scalable solutions we're able to put out there have that mix of great user interface, great engineering, and great science. That all comes together. But you asked about generalizability, which I want to touch upon more.
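To make that human-in-the-loop pattern concrete, here is a minimal sketch in Python. It assumes a model that returns a confidence score alongside its prediction; the threshold, names, and review queue are illustrative assumptions, not Salesforce's actual implementation.

```python
from dataclasses import dataclass

# Illustrative threshold: below this, a prediction is routed to a human
# reviewer instead of being applied automatically.
REVIEW_THRESHOLD = 0.80

@dataclass
class Prediction:
    label: str
    confidence: float

def route(pred: Prediction) -> str:
    """Use the model output directly only when it is confident enough;
    otherwise surface it in the UI for a person to confirm or correct."""
    return "auto" if pred.confidence >= REVIEW_THRESHOLD else "human_review"

def record_review(pred: Prediction, accepted: bool, training_queue: list) -> None:
    """Every human decision becomes a labeled example for the next training run,
    so failure modes feed the model rather than just frustrating the user."""
    training_queue.append({"label": pred.label, "accepted": accepted})

# Example: a low-confidence suggestion goes to review, and the reviewer's
# verdict is queued as fresh training data.
queue: list = []
p = Prediction(label="order_status", confidence=0.62)
if route(p) == "human_review":
    record_review(p, accepted=False, training_queue=queue)
```

The design choice being sketched is that uncertain outputs are never silently applied: they become review tasks, and each review becomes training data.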
DAVE COLE
Right.
JAYESH GOVINDARAJAN
Once we've had a successful pilot, it really is the beginning of the next stage of the journey: is the machine learning component that we built going to generalize across all customers? It will not, without tuning. So how can you build systems that customers can tune by themselves? What kind of training samples can they give this component so that it's a good fit for them, for their use case?
You have systems that can work great with a global model. For example, if you're trying to recommend articles, a global model will work reasonably well for that. If you're trying to help an agent close a deal, that is very specific to your company and your processes. All the data that gets collected could be very different for different companies. That's where the problems split into vertical and horizontal: building something vertical-specific or something horizontal like a chatbot which has the ability to be applied broadly.
DAVE COLE
I’d like to do a thought experiment here. I assume that there is an ability within the Salesforce platform that allows me to make predictions. For example, it allows me to understand which of my Salesforce opportunities have the highest likelihood to close in some timeframe, maybe this quarter, and things like that.
I imagine the features that go into that model, to your point, will differ from one company to the next. It's not a one-size-fits-all type of solution when you're trying to predict whether or not an opportunity will close.
When you're building this as a capability and feature within the Salesforce platform, I assume that you're thinking, "Since we're not just building and using the same dataset every single time, we have to provide the capability to train a model based on different types of datasets." That way, the person who is actually training it can determine whether or not the model is going to be good and trustworthy.
Walk us through this because those are things that not every data scientist has to think about. Right? Most data scientists are working with a confined dataset. They don't have to think outside of their specific problem. They don't have to generalize it to thousands of customers.
JAYESH GOVINDARAJAN
Yeah.
DAVE COLE
Do I have that right? Did I summarize part of the challenge?
JAYESH GOVINDARAJAN
You did, yeah.
DAVE COLE
Cool. So how does your data science team go about doing that? It's almost like you're not just building models, but the capability for your Salesforce users to build models. Is that right?
JAYESH GOVINDARAJAN
That's exactly right. Let's take an example that's going to be used by our customers.
One of the things that we built and shipped, which has really taken off during the pandemic, is our chatbot engine. A lot of what chatbot engines do, not just at Salesforce, is understand conversations and take you down a certain path to get your task done. People don't talk to Salesforce chatbots for chitchat; they want to get something done and they want to get there quickly.
Now, the intents that you might need to nail, so that you can send the customer conversation down a certain path, are going to be different for different customers.
For example, if you're a retail customer, you might have an intent like, "What's my order status?"
DAVE COLE
Right.
JAYESH GOVINDARAJAN
If you're an IT customer, it’s a completely different intent: "Can you help me reset my password?" If you're a healthcare company, it could be getting appointments. Broadly there is this entire gamut of intents that are striped by industry. Right?
How does one build a solution that is going to cut across all of those? What we've done in that case is basically build an intent engine, which comes pre-trained on a language model that is global in nature. Then, when a customer implements the product, we have a training game that's part of the product setup process. It essentially fine-tunes the intents to the specific needs of that customer.
What's amazing to see, now that it's completely scaled out and we have thousands of customers using it, is the industry specialization that appears, although we never put it there. It's based on what customers fine-tune. You see vocabulary for every industry, specific to the space you play in.
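As an illustration of that setup-time tuning flow, here is a toy sketch in Python: a global intent model extended with tenant-specific vocabulary. The TF-IDF classifier, utterances, and intent names are hypothetical stand-ins, not the Einstein intent engine itself.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Global training data: generic utterances that cut across industries.
global_examples = [
    ("where is my order", "order_status"),
    ("track my package", "order_status"),
    ("i forgot my password", "password_reset"),
    ("cannot log in to my account", "password_reset"),
    ("book an appointment", "scheduling"),
    ("reschedule my visit", "scheduling"),
]

def train_intent_model(examples):
    texts, labels = zip(*examples)
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)
    return model

# "Day Zero": every tenant starts from the global model.
global_model = train_intent_model(global_examples)

# Setup-time tuning: an IT-vertical tenant maps its own vocabulary
# ("ticket", "VPN credentials") onto the shared intents.
tenant_examples = [
    ("status of my support ticket", "order_status"),
    ("reset my VPN credentials", "password_reset"),
]
tenant_model = train_intent_model(global_examples + tenant_examples)

print(tenant_model.predict(["what is the status of ticket 4521"]))  # expected: ['order_status']
```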
DAVE COLE
When you say it's an intent engine, help me understand that a little better. Is it specific to the intent? If you're trying to get an order status, it shows the data that you should train it on, the flavor of the chatbot engine—is that what you mean by intent?
JAYESH GOVINDARAJAN
Think of it like a hierarchy. There's a lower-level model that is pre-trained. It'll understand things like 'order'. If you call 'order' something else in your industry, it will not map until you give it some more training data to make that association.
Over time, in essence, we do order management pretty well because it's part of common parlance or vocabulary that you and I use.
DAVE COLE
Right.
JAYESH GOVINDARAJAN
If the concept of an order is different in a different industry, that would look different. That would need to be fine-tuned specifically.
DAVE COLE
Yeah, let's say instead of order status that it was a support ticket status. What's the status of my support ticket? It's sort of the same thing. You're looking up the status of a transaction so-to-speak, an interaction. The language you would use to check on it might be very different. Right?
JAYESH GOVINDARAJAN
Yeah, exactly.
DAVE COLE
Got it.
JAYESH GOVINDARAJAN
For example, when you say, "I want to reset my password," there's a very specific meaning to that. I'm running out of examples here, but you can think of 'reset' in a context that's different from a password. You would need to know what to reset. If you implemented the solution, you would know exactly what to reset.
DAVE COLE
Right. The intent might be to reset something. That something could be a password in the case of an IT vertical type of company. But it could be, I don't know, resetting my subscription or something, I don't know.
JAYESH GOVINDARAJAN
In a healthcare context you'd be resetting your elbow, as morbid as that sounds.
DAVE COLE
Yeah. It's at a joint or something. Okay. Got it. Taking a step back, one piece of advice, if you have to create a more generalized model that can go across industries, is to look at intent. Look at building models that can be intent-specific, then allow those industries to make them their own. There might be slightly different ways of checking a status: a retail order status versus a support ticket within an IT company. That makes a lot of sense. That's one nice, interesting nugget and takeaway.
Let’s say you've determined that you want to build a chatbot. You've rolled it out for a couple of customers. It works great. Then you want to roll this more broadly out to the entire Salesforce customer base. You mentioned that, at that point, you actually build out a cross-functional team that is specific to that: UX, ML engineers, data scientists. Is there anything else that I'm missing?
JAYESH GOVINDARAJAN
That's my team, but we also have product people as part of the mix. We also have documentation specialists.
DAVE COLE
Got it. This team will focus and build. Are they taking the prototype and running with it or are they almost rewriting it from scratch? Does it depend?
JAYESH GOVINDARAJAN
It takes a while to convince people that it is a prototype. When you're getting it ready for primetime, you have to think about doing a full rewrite.
DAVE COLE
So most likely, it's a full rewrite? Then they build it all out. How does it actually get rolled out? Is it all at once or does it get rolled out slowly across the customer base? Is there any A/B testing? How do you track the success of the new feature that you've added? How does that all work?
JAYESH GOVINDARAJAN
One of the things that I love about this job is that we are not just happy shipping shelfware to customers. One of the things we start to collect right after we launch the product is usage. Pretty much everything is instrumented across the customer's whole usage value chain.
The rollout is initially slow because we're still figuring out how to tell the story and shape the narrative. Once it gets up to 40 or 50 customers, it starts to 'hockey stick' because customers talk to each other. They know what's working well. We have a much better sense of what to do better. One of the things I like to say is that we have this 1:10 customer journey.
DAVE COLE
Right.
JAYESH GOVINDARAJAN
It becomes a 10:100 customer journey, or maybe then a 100:1000 customer journey. These are very different challenges at each stage as we scale. Then 1,000 up is completely different.
DAVE COLE
Basically, the powers of 10, if you will.
JAYESH GOVINDARAJAN
The powers of 10.
DAVE COLE
As you're rolling this out and it's growing, I imagine that part of this, too, is collecting customer feedback. Is that the product manager's job, to reach out to customers and see how things are going? I imagine you're also looking at that telemetry, that usage data, to make sure not only that it's being used, but that it's being used successfully: no errors or misuse.
JAYESH GOVINDARAJAN
Exactly. Telemetry tells you a lot, but there's really no substitute for great anecdotes. Telemetry doesn't tell you where to go next.
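For a sense of what that instrumentation might look like, here is a minimal sketch in Python of structured usage events. The field names and the file sink are assumptions; a production system would write to an event bus rather than a local file.

```python
import json
import time
import uuid

def emit_usage_event(feature, tenant_id, action, metadata=None):
    """Append one structured usage event per user interaction.
    Aggregating these tells you *whether* a feature is used and where it
    errors; as noted above, it won't tell you where to go next."""
    event = {
        "event_id": str(uuid.uuid4()),
        "ts": time.time(),
        "feature": feature,        # e.g. "chatbot_intent_suggestion"
        "tenant_id": tenant_id,
        "action": action,          # e.g. "shown", "accepted", "dismissed", "error"
        "metadata": metadata or {},
    }
    with open("usage_events.jsonl", "a") as f:
        f.write(json.dumps(event) + "\n")
    return event

# Example: a suggestion shown and then accepted by a user.
emit_usage_event("chatbot_intent_suggestion", "tenant-42", "shown")
emit_usage_event("chatbot_intent_suggestion", "tenant-42", "accepted")
```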
DAVE COLE
Right. Right.
JAYESH GOVINDARAJAN
I think that's one of the key things that we do. Once we launch something, it goes to GA. There's an almost continuous feedback process that we have. We have user researchers and product managers go in, talk to customers and get their feedback. Often customers would tell us directly what they want, what's working well and what isn’t.
Sometimes that's not the case. You have to dig deeper to understand what might be going on and what you might want to do to get to the next stage. It's not always obvious, but sometimes it is.
DAVE COLE
Sure. I like that quote there. I'm going to put it on my wall: "Telemetry doesn't tell you where to go."
It tells you what happened in the past, but it may not tell you what the next great enhancement should be, for the model that you've embedded.
Any parting words of wisdom, Jayesh, if somebody right now is a data science leader working for a SaaS company, trying to figure out the best approach to getting models into production and embedded into the SaaS product?
JAYESH GOVINDARAJAN
There are two big things to think really hard about. One is to treat error conditions as opportunities to bring humans into the loop to train the system better. The sooner one does that, the more options you have for the system to learn from usage. I think the journey, once you've shipped V1, has only just begun.
DAVE COLE
All right.
JAYESH GOVINDARAJAN
You haven't really gotten the feedback loop going. That's one bit of advice for a scalable system. Second, the model is just a very small part of the entire service that we run at scale.
Those are two things that I think are key for doing these big rollouts.
DAVE COLE
Right. You mentioned that UX is obviously very important. The data that you're working with presents another huge challenge for your team, right? The data for each customer can be very different.
JAYESH GOVINDARAJAN
Yeah.
DAVE COLE
Even within the same industry, the data can be very different. Trying to standardize that in some way is a particular challenge, right?
JAYESH GOVINDARAJAN
Yeah. To get to that kind of scale, one needs to go through what we call ‘Day Zero’ models. They need to be good enough to help customers do their job, so that they can then train it more as they use it.
DAVE COLE
Right.
JAYESH GOVINDARAJAN
There's also the Day One model. We call it the cold start problem, which is quite real. You might do all the work and ship it, but if your Day One model isn't good enough and doesn't get usage, it never gets to the second stage. It depends on usage to get better.
That's the other key piece of learning: we think really long and hard about what the Day Zero model and experience look like.
Ideally, the global model works well enough for customers to take a crack at solving the problem. As customers use it, we give them back that model with their specific vocabulary built into it.
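A minimal sketch of that arrangement, a global Day Zero model with per-tenant fine-tuned overrides, might look like the following in Python. The registry, promotion rule, and names are illustrative assumptions, not Salesforce's architecture.

```python
class ModelRegistry:
    """Resolve a model per tenant, falling back to the global 'Day Zero' model."""

    def __init__(self, global_model):
        self.global_model = global_model   # works reasonably well for everyone
        self.tenant_models = {}            # tenant_id -> fine-tuned model

    def resolve(self, tenant_id):
        # Serve the tenant's fine-tuned model once usage has produced one;
        # until then, the cold-start experience is the global model.
        return self.tenant_models.get(tenant_id, self.global_model)

    def promote(self, tenant_id, candidate_model, candidate_score, global_score):
        # Only swap in the tenant-specific model when it actually beats the
        # global baseline on the tenant's own holdout data.
        if candidate_score > global_score:
            self.tenant_models[tenant_id] = candidate_model
            return True
        return False

# Example usage with string stand-ins for real model objects.
registry = ModelRegistry(global_model="global-v1")
registry.promote("tenant-42", "tenant-42-v1", candidate_score=0.81, global_score=0.74)
assert registry.resolve("tenant-42") == "tenant-42-v1"
assert registry.resolve("tenant-7") == "global-v1"   # still on Day Zero
```

The point of the promotion gate is exactly the cold-start concern above: a tenant keeps the decent global experience until their own model is demonstrably better.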
DAVE COLE
Right. There's a specific vocabulary, but you're also getting some ground truth. Right? If it's trying to help a Salesforce customer close deals faster and better, and you're actually seeing those deals close, that's a feedback loop to the engine to make sure it's predicting which opportunities to prioritize. It improves the accuracy of predicting which opportunities the customer's team should be focusing on.
JAYESH GOVINDARAJAN
Yes. Absolutely.
DAVE COLE
Got it. Okay. That makes a lot of sense. I'm going to switch gears here a little bit, Jayesh, although I could talk to you a lot more about that. It sounds like a very fascinating challenge that you have. There are some common themes there that I think we heard.
Obviously there's a lot of iterating. One of the things that we talk about a lot on the Data Science Leaders podcast is that data science is not a one-and-done type of thing. It's not like you're building an app and waiting for the next feature. You have to iterate. You have to constantly get feedback and make sure the model is continually trained. A model that is trained once just sits out there, gets stale, and degrades over time.
A lot of those challenges that you mentioned, I think, are common to all. I also think the scale that you're dealing with, the fact that you have to generalize to a large customer base across industries, and the way in which your team goes about doing that—focusing on intent and some of the other things you've mentioned—makes a lot of sense.
Let's switch gears. You mentioned three outputs of your data science team: foundational, assisted and fully automated. Can you dig into that and describe what exactly you mean by that?
JAYESH GOVINDARAJAN
Yeah. It happened quite naturally, looking across sales and service in general at opportunities for doing AI and machine learning within the enterprise.
Three key pillars emerge. The first one is full automation for repetitive activities. There's no reason for us to be doing mundane work. There's enough training data for us to be able to automate that.
DAVE COLE
Can you give an example of a repetitive activity that your team would automate?
JAYESH GOVINDARAJAN
Yeah, absolutely. For example, no one should ever have to find your order status for you. That can be completely self-service. You go to any retail website and you should have a chatbot that gives you, with certainty, what your order status is, where it is, when it’s going to come. If it's going to be returned, somebody should be able to ping you about what that status is. This is a classic example. There's no need to have anyone in the loop to be able to execute on that.
DAVE COLE
Right. Right.
JAYESH GOVINDARAJAN
There are a lot of examples.
DAVE COLE
So...eliminate humans from the process as much as possible?
JAYESH GOVINDARAJAN
Free up humans to do what humans can do better.
DAVE COLE
Okay. There you go. I like it.
JAYESH GOVINDARAJAN
Which brings me to the second pillar: building components that assist us in our daily lives as we work.
Imagine you're a customer service agent. There's probably an expert sitting right next to you who has been doing this for five years. You might be a newbie. A lot of what we build is essentially components that glean from the best and help the rest catch up. The best replies used in the system are, with permission from the company, surfaced and propagated across the company.
Imagine you're an agent. You solve a problem. You write the document. You want the next agent who gets something similar to have that document pop up as a potential solution that they could use to solve their case.
DAVE COLE
Right.
JAYESH GOVINDARAJAN
These are all examples of assistive AI that's very much embedded into your day-to-day workflow.
So, recapping the first two: automate what you can, and assist where you can.
The third one, broadly, is being able to optimize entire workflows at the customer relationship management level. It is neither automation nor assistance, but giving people full visibility into what's going on within your service center. Which people, with which skills, are doing well? Which need some more training? This is optimizing at a broader level.
A lot of what we do falls under one of these three buckets. In fact, my teams are organized that way. One team is focused on building chatbots and automation. Another team is focused on assistive intelligence, and they work pretty closely with the business units that build these solutions. Finally, there are teams that look across all the data to figure out organizational opportunities for customers.
DAVE COLE
Got it. So the first category, which we call full automation, deals with some of those repetitive tasks, and a chatbot would fit into that category. It means this is something a human could do but really shouldn't be doing, because it's so repetitive; if an AI solution can replace it, we want humans doing things that are more value-additive.
The second one you mentioned was more of that assistance: a recommendation engine, almost. If a human were responding to an email from a customer about a complaint or an issue, the recommended response might come up—nothing that they should just copy/paste, but something that they would start from and modify and improve upon.
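To make that assistive pattern concrete, here is a toy retrieval sketch in Python: TF-IDF similarity over past resolved-case write-ups, surfacing starting points rather than verbatim answers. The corpus and scoring are illustrative stand-ins, not the actual Einstein implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Past resolved-case documents written by agents (toy corpus).
knowledge_base = [
    "Reset the customer's password from the admin console and verify MFA.",
    "Order stuck in processing: re-trigger fulfillment from the orders queue.",
    "Refund issued after a duplicate charge; escalate to billing if over $500.",
]

vectorizer = TfidfVectorizer(stop_words="english")
kb_matrix = vectorizer.fit_transform(knowledge_base)

def suggest_documents(new_case, top_k=2):
    """Surface the most similar past solutions as starting points for the agent,
    not as verbatim answers to paste."""
    scores = cosine_similarity(vectorizer.transform([new_case]), kb_matrix)[0]
    best = scores.argsort()[::-1][:top_k]
    return [(knowledge_base[i], round(float(scores[i]), 3)) for i in best]

print(suggest_documents("customer was charged twice and wants a refund"))
```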
Then the last one, I need a little bit of clarification on: optimizing entire workflows. Can you give me an example of what that might be in the last category?
JAYESH GOVINDARAJAN
Yeah. I'll give an example of what we do with sales interactions. In order to identify opportunities to coach salespeople, you'd need the ability to go through many, many calls that they've been on, which is near impossible for a person. It's not an assistive case, because it's not meant for the sales agent who's in the context of doing a job to get help with it.
DAVE COLE
Right.
JAYESH GOVINDARAJAN
It's not a case where we're trying to automate that conversation—
DAVE COLE
—not yet, anyway.
JAYESH GOVINDARAJAN
Yeah. It's more of a scenario where you're trying to build for, essentially, the admins. It's not the agent or the end user you're trying to help directly. It's the admins who run the entire business. Give them visibility into what might be going well, what might not be, what the opportunities are to train people better, and where you can give them very specific advice.
Maybe also advice on how to handle pricing conversations, because it's noticeable from the last six calls that pricing conversations present a challenge. It gives clues on where and what to tighten.
Being able to show them exactly which part of the call to look at—now that's a problem for a machine. You cannot simply go through every call from a team of 100 sales agents yourself to give that kind of advice.
DAVE COLE
Got it.
JAYESH GOVINDARAJAN
It falls under the broad optimize bucket.
DAVE COLE
Right. So it'd be helping your sales force become more productive by recommending best practices to a manager for a particular sales account executive or sales agent.
JAYESH GOVINDARAJAN
Yeah, exactly.
DAVE COLE
So it'd be advice like, "Hey, this is what we find to be the most optimal series of touches that you should have with your customer, and you're falling outside the optimal path. Let's have a conversation about that."
Is that right? That nuance and conversation is left to the manager or administrator to relay.
JAYESH GOVINDARAJAN
Yes. Exactly. In fact, it's a very different kind of persona, because their job is not to go in and close the deal.
DAVE COLE
Right.
JAYESH GOVINDARAJAN
That's someone else's job. It's not the supervisor's job to get on a call with Dave or Jayesh and solve their problem. It's to make sure that, overall, things are better.
DAVE COLE
Got it.
JAYESH GOVINDARAJAN
It’s a huge data challenge, actually.
DAVE COLE
Absolutely. Well, this has been great, Jayesh. Today, we talked a little bit about your engineering background and how it led you to your current role, helping embed data science models into the Salesforce platform, also known as Einstein.
We talked a little bit about how you organize your team into that foundational, assisted and fully automated focus. I think it’s great. I think that's universal and a great way to think about it for any data science leader out there who is looking to help their customer base in their SaaS solution, for sure.
I really appreciate you coming on the Data Science Leaders podcast. This has been fantastic.
JAYESH GOVINDARAJAN
Thank you, Dave. It was a pleasure talking to you.
DAVE COLE
If people want to reach out and get in contact with you, can they connect with you on LinkedIn or do you have any other social media?
JAYESH GOVINDARAJAN
Absolutely, LinkedIn.
DAVE COLE
Is LinkedIn the best way? Awesome. Well, hey, thanks again, and everyone out there have a great rest of your week. Thanks, Jayesh.
JAYESH GOVINDARAJAN
Thank you, Dave.