Industrializing Data Science: Transform into an End-to-End, Analytics-Oriented Master of Repeatable Intelligence Delivery
Transcript of a discussion on the latest methods, tools, and thinking around making data science
an integral core function of any business.
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.
Dana Gardner: Hello, and welcome to the next BriefingsDirect Voice of Analytics
Innovation podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions,
your host and moderator for this ongoing discussion on the latest insights into data
science advances and strategy.
Businesses these days are quick to declare their intention to become data-driven, yet
the deployment of analytics and the use of data science remains spotty, isolated, and
often uncoordinated. To fully reach their digital business transformation potential,
businesses large and small need to make data science more of a repeatable assembly
line -- an industrialization, if you will, of end-to-end data exploitation.
Stay with us now as we explore the latest
methods, tools, and thinking around making data
science an integral core function that both
responds to business needs and scales to
improve every aspect of productivity.
To learn more about the ways that data and
analytics behave more like a factory -- and less
like an Ivory Tower -- please join me now in
welcoming Doug Cackett, EMEA Field Chief
Technology Officer at Hewlett Packard
Enterprise. Welcome, Doug.
Doug Cackett: Thank you so much, Dana.
Gardner: Doug, why is there a lingering gap --
and really a gaping gap -- between the amount of
data available and the analytics that should be taking advantage of it?
Data’s potential on edge
Cackett: That’s such a big question to start with, Dana, to be honest. We probably
need to accept that we’re not doing things the right way at the moment. Actually,
Forrester suggests that something like 40 zettabytes of data are going to be under
management by the end of this year, which is quite enormous.
And, significantly, more of that data is being generated at the edge through applications,
Internet of Things (IoT), and all sorts of other things. This is where the customer meets
your business. This is where you’re going to have to start making decisions as well.
So, the gap is two things. It’s the gap between the amount of data that’s being generated
and the amount you can actually comprehend and create value from. In order to
leverage that data from a business point of view, you need to make decisions at the edge.
You will need to operationalize those decisions and move that capability to the edge
where your business meets your customer. That's the challenge, and we're all looking for
machine learning (ML) -- and the operationalization of all of those ML models into
applications -- to make the difference.
Gardner: Why does HPE think that moving more toward a factory model, industrializing
data science, is part of the solution to compressing and removing this gap?
Cackett: It’s a math problem, really, if you
think about it. If there is exponential
growth in data within your business, if
you’re trying to optimize every step in
every business process you have, then
you’ll want to operationalize those insights
by making your applications as smart as
they can possibly be. You’ll want to
embed ML into those applications.
Because, correspondingly, there’s exponential growth in the demand for analytics in your
business, right? And yet, the number of data scientists you have in your organization -- I
mean, growing them exponentially isn’t really an option, is it? And, of course, budgets
are also pretty much flat or declining.
So, it’s a math problem because we need to somehow square away that equation. We
somehow have to generate exponentially more models for more data, getting to the
edge, but doing that with fewer data scientists and lower levels of budget.
Industrialization, we think, is the only way of doing that. Through industrialization, we can
remove waste from the system and improve the quality and control of those models. All
of those things are going to be key going forward.
Gardner: When we’re thinking about such industrialization, we shouldn’t necessarily be
thinking about an assembly line of 50 years ago -- where there are a lot of warm bodies
lined up. I’m thinking about the Lucille Ball assembly line, where all that candy was
coming down and she couldn’t keep up with it.
Perhaps we need more of an ultra-modern assembly line, where it’s a series of robots
and with a few very capable people involved. Is that a fair analogy?
Industrialization puts data science on the line
Cackett: I think that’s right. Industrialization is about manufacturing where we replace
manual labor with mechanical mass production. We are not talking about that. Because
we’re not talking about replacing the data scientist. The data scientist is key to this. But
we want to look more like a modern car plant, yes. We want to make sure that the data
scientist is maximizing the value from the data science, if you like.
We don’t want to go hunting around for the right tools to use. We don’t want to wait for
the production line to play catch up, or for the supply chain to catch up. In our case, of
course, that’s mostly data or waiting for infrastructure or waiting for permission to do
something. All of those things are a complete waste of their time.
As you look at the amount of productive time data scientists spend creating value, that
can be pretty small compared to their non-productive time -- and that’s a concern. Part of
the non-productive time, of course, has been with those data scientists having to
discover a model and optimize it. Then they would do the steps to operationalize it.
But maybe doing the data and operations engineering things to operationalize the model
can be much more efficiently done with another team of people who have the skills to do
that. We’re talking about specialization here, really.
But there are some other learnings as well. I recently wrote a blog about it. In it, I looked
at the modern Toyota production system and started to ask questions around what we
could learn about what they have learned, if you like, over the last 70 years or so.
It was not just about automation, but also
how they went about doing research and
development, how they approached
tooling, and how they did continuous
improvement. We have a lot to learn in all of those areas.
An awful lot of the organizations that I deal with haven't had a lot of experience
around such operationalization problems. They haven’t built that part of their assembly
line yet. Automating supply chains and mistake-proofing things, what Toyota calls
jidoka, are also really important. It's a really interesting area to be involved with.
Gardner: Right, this is what US manufacturing, in the bricks and mortar sense, went
through back in the 1980s when they moved to business process reengineering,
adopted kaizen principles, and did what Deming and the quality-emphasis movement had done
for the Japanese auto companies.
And so, back then there was a revolution, if you will, in physical manufacturing. And now
it sounds like we’re at a watershed moment in how data and analytics are processed.
Cackett: Yes, that’s exactly right. To extend that analogy a little further, I recently saw a
documentary about Morgan cars in the UK. They’re a hand-built kind of car company.
Quite expensive, very hand-built, and very specialized.
And I ended up by almost throwing things at the TV because they were talking about the
skills of this one individual. They only had one guy who could actually bend the metal to
create the bonnet, the hood, of the car in the way that it needed to be done. And it took
two or three years to train this guy, and I’m thinking, “Well, if you just automated the
process, and the robot built it, you wouldn’t need to have that variability.” I mean, it’s just
so annoying, right?
In the same way, with data science we’re
talking about laying bricks -- not Michelangelo
hammering out the figure of David. What I’m
really trying to say is that a lot of the data science in
our customers' organizations is fairly
mundane. To get that through the door, get it
done and dusted, and give them time to do the other bits of finesse using more skills --
that’s what we’re trying to achieve. Both [the basics and the finesse] are necessary and
they can all be done on the same production line.
Gardner: Doug, if we are going to reinvent and increase the productivity generally of
data science, it sounds like technology is going to be a big part of the solution. But
technology can also be part of the problem.
What is it about the way that organizations are deploying technology now that needs to
shift? How is HPE helping them adjust to the technology that supports a better data science function?
Define and refine
Cackett: We can probably all agree that most of the tooling around MLOps is relatively
young. The two types of company we see are companies that haven't yet gotten to
the stage of trying to operationalize more models -- in other words, they don't
really understand what the problem is yet -- and companies that are trying but
haven't refined a repeatable process.
With data science we’re
talking about laying bricks –
not Michelangelo hammering
out the figure of David.
Page 5 of 12
Forrester research suggests that only 14 percent of organizations that they surveyed
said they had a robust and repeatable operationalization process. It’s clear that the other
86 percent of organizations just haven’t refined what they’re doing yet. And that’s often
because it’s quite difficult.
Many of these organizations have only just linked their data science to their big data
instances or their data lakes. And they’re using it both for the workloads and to develop
the models. And therein lies the problem. Often they get stuck with simple things like
trying to have everyone use a uniform environment. All of your data scientists are
sharing both the data and the compute environment.
And data scientists can often be very destructive in what they’re doing. Maybe
overwriting data, for example. To avoid that, you end up replicating the data. And if
you’re going to replicate terabytes of data, that can take a long period of time. That also
means you need new resources, maybe more compute power, and that means
approvals, and it might mean new hardware, too.
Often the biggest challenge is in provisioning the
environment for data scientists to work on, the data
that they want, and the tools they want. That can all
often lead to huge delays in the process. And, as we
talked about, this is often a time-sensitive problem.
You want to get through more tasks and so every
delayed minute, hour, or day that you have
becomes a real challenge.
The other thing that is key is that data science is very peaky. You’ll find that data
scientists may need no resources or tools on Monday and Tuesday, but then they may
burn every GPU you have in the building on Wednesday, Thursday, and Friday. So,
managing that as a business is also really important. If you’re going to get the most out
of the budget you have, and the infrastructure you have, you need to think differently
about all of these things. Does that make sense, Dana?
Gardner: Yes. Doug, how is HPE Ezmeral being designed to give data scientists
more of what they need, how they need it, and to help close the gap
between the ad hoc approach and the right kind of assembly-line approach?
Two assembly lines, to start
Cackett: Look at it as two assembly lines, at the very minimum. That’s the way we want
to look at it. And the first thing the data scientists are doing is the discovery.
The second is the MLOps processes. There will be a range of people operationalizing
the models. Imagine that you’re a data scientist, Dana, and I’ve just given you a task.
Let’s say there’s a high defection or churn rate from our business, and you need to
First you want to find out more about the problem because you might have to break that
problem down into a number of steps. And then, in order to do something with the data,
you’re going to want an environment to work in. So, in the first step, you may simply
want to define the project, determine how long you have, and develop a cost center.
You may next define the environment: Maybe you need CPUs or GPUs. Maybe you
need them highly available and maybe not. So you’d select the appropriate-sized
environment. You then might next go and open the tools catalog. We’re not forcing you
to use a specific tool; we have a range of tools available. You select the tools you want.
Maybe you’re going to use Python. I know you’re hardcore, so you’re going to code
using Jupyter and Python.
And the next step, you then want to find the right data, maybe through the data catalog.
So you locate the data that you want to use and you just want to push a button and get
provisioned for that lot. You don’t want to have to wait months for that data. That should
be provisioned straight away, right?
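The self-service steps described here (define the project and cost center, size an environment, pick tools from a catalog, locate data, then push a button) can be sketched as one declarative request. Everything below -- field names, the provision() helper, the workspace URL -- is hypothetical and for illustration only, not an actual HPE Ezmeral API:

```python
# Hypothetical, simplified project request; the fields mirror the steps
# described in the discussion (project, cost center, environment, tools, data).
project_request = {
    "project": "churn-analysis",
    "cost_center": "analytics-emea",      # assumption: projects bill to a cost center
    "duration_days": 30,
    "environment": {
        "compute": {"gpus": 2, "cpus": 16, "high_availability": False},
    },
    "tools": ["jupyter", "python3"],          # chosen from a tools catalog
    "datasets": ["customer_churn_snapshot"],  # located via a data catalog
}

def provision(request: dict) -> str:
    """Stand-in for the push-button step: validate the request and
    return an endpoint for the newly provisioned workspace."""
    assert request["environment"]["compute"]["gpus"] >= 0
    return f"https://workspace.example.internal/{request['project']}"

print(provision(project_request))
```

The point of the sketch is that the data scientist specifies *what* they need, and the platform turns that into a running environment in minutes rather than months.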
You can do your work, save all your work away into a virtual repository, and save the
data so it’s reproducible. You can also then check the things like model drift and data
drift and those sorts of things. You can save the code and model parameters and those
sorts of things away. And then you can put that on the backlog for the MLOps team.
Then the MLOps team picks it up and goes through a similar data science process. They
want to create their own production line now, right? And so, they’re going to seek a
different set of tools. This time, they need continuous integration and continuous delivery
(CICD), plus a whole bunch of data tooling to operationalize your model. They're
going to define the way that that model is going to be deployed. Let’s say, we’re going to
use Kubeflow for that. They might decide on, say, an A/B testing process. So they’re
going to configure that, do the rest of the work, and press the button again, right?
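The A/B testing step can be sketched, in spirit, as deterministic traffic splitting between the current model and a candidate. This is a generic illustration of the idea, not Kubeflow's actual mechanism; the model names and the 10 percent split are assumptions:

```python
import hashlib

def route_model(customer_id: str, candidate_share: float = 0.10) -> str:
    """Hash the customer ID so each customer consistently sees the same
    model variant across requests (stable A/B assignment)."""
    digest = hashlib.sha256(customer_id.encode()).digest()
    bucket = digest[0] / 255.0  # map the first byte to [0, 1]
    return "churn-model-v2" if bucket < candidate_share else "churn-model-v1"

# Roughly 10% of customers land on the candidate model.
routes = [route_model(f"cust-{i}") for i in range(1_000)]
share = routes.count("churn-model-v2") / len(routes)
print(f"candidate share: {share:.2%}")
```

Hashing rather than random sampling matters here: a customer's assignment never flips between requests, which keeps the comparison between the two models clean.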
Clearly, this is an ongoing process.
Fundamentally that requires workflow
and automatic provisioning of the
environment to eliminate wasted time,
waiting for stuff to be available. It is
fundamentally what we’re doing in our
But in the wider sense, we also have consulting teams helping customers get up to
speed, define these processes, and build the skills around the tools. We can also do this
as-a-service via our HPE GreenLake proposition as well. Those are the kinds of things
that we’re helping customers with.
Gardner: Doug, what you’re describing as needed in data science operations is a lot like
what was needed for application development with the advent of DevOps several years
ago. Is there commonality between what we’re doing with the flow and nature of the
process for data and analytics and what was done not too long ago with application
development? Isn’t that also akin to more of a cattle approach than a pet approach?
Operationalize with agility
Cackett: Yes, I completely agree. That's exactly what this is about for an MLOps
process. It's analogous to the CICD, DevOps part of the IT
business. But a lot of that tool chain is being taken care of by things like Kubeflow and
MLflow Project, some of these newer, open source technologies.
I should say that this is all very new, the ancillary tooling that wraps around the CICD.
The CICD tools themselves are also pretty new. What we're also attempting to do is allow you,
as a business, to bring these new tools and on-board them so you can evaluate them
and see how they might impact what you’re doing as your process settles down.
The idea is to put them in a wrapper and make them available so we get a more
dynamic feel to this. The way we’re doing MLOps and data science generally is
progressing extremely quickly at the moment. So you don’t want to lock yourself into a
corner where you’re trapped into a particular workflow. You want to be able to have
agility. Yes, it’s very analogous to the DevOps movement as we seek to operationalize
the ML model.
The other thing to pay attention to is the
changes that need to happen to your
operational applications. You’re going to
have to change those so they can call the
ML model at the appropriate place, get the
result back, and then render that result in
whatever way is appropriate. So changes to the operational apps are also important.
Gardner: You really couldn’t operationalize ML as a process if you’re only a tools
provider. You couldn’t really do it if you’re a cloud services provider alone. You couldn’t
just do this if you were a professional services provider.
It seems to me that HPE is actually in a very advantageous place to allow the best-of-breed
tools approach where it's most impactful, but to also start putting some standard glue
around this -- the industrialization. How is HPE in an advantageous place to have a
meaningful impact on this difficult problem?
Cackett: Hopefully, we’re in an advantageous place. As you say, it’s not just a tool, is it?
Think about the breadth of decisions that you need to make in your organization, and
how many of those could be optimized using some kind of ML model.
You’d understand that it’s very unlikely that it’s going to be a tool. It’s going to be a range
of tools, and that range of tools is going to be changing almost constantly over the next
10 to 20 years.
You’re going to have to change
[your operational applications] so
they can tool the ML model at the
appropriate place, get the result
back, and then render that result.
Page 8 of 12
This is much more to do with a platform approach because this area is relatively new.
Like any other technology, when it’s new it almost inevitably to tends to be very technical
in implementation. So using the early tools can be very difficult. Over time, the tools
mature, with a mature UI and a well-defined process, and they become simple to use.
But at the moment, we’re way up at the other end. And so I think this is about platforms.
And what we’re providing at HPE is the platform through which you can plug in these
tools and integrate them together. You have the freedom to use whatever tools you
want. But at the same time, you’re inheriting the backend system. So, that’s Active
Directory and Lightweight Directory Access Protocol (LDAP) integrations, and that’s
linkage back to the data, your most precious asset in your business. Whether that be in
a data lake or a data warehouse, in data marts or even streaming applications.
This is the melting pot of the business at the moment. And HPE has had a lot of
experience helping our customers deliver value through information technology
investments over many years. And that’s certainly what we’re trying to do right now.
Gardner: It seems that HPE Ezmeral is moving toward industrialization of data science,
as well as other essential functions. But is that where you should start, with
operationalizing data science? Or is there a certain order by which this becomes more
fruitful? Where do you start?
Machine learning leads the change
Cackett: This is such a hard question to answer, Dana. It’s so dependent on where you
are as a business and what you’re trying to achieve. Typically, to be honest, we find that
the engagement is normally with some element of change in our customers. That’s
often, for example, where there’s a new digital transformation initiative going on. And
you’ll find that the digital transformation is being held back by an inability to do the data
science that’s required.
There is another Forrester report that I’m sure
you’ll find interesting. It suggests that 98
percent of business leaders feel that ML is
key to their competitive advantage. It’s hardly
surprising then that ML is so closely related to
digital transformation, right? Because that’s
about the stage at which organizations are
competing after all.
So we often find that that’s the starting point, yes. Why can’t we develop these models
and get them into production in time to meet our digital transformation initiative? And
then it becomes, “Well, what bits do we have to change? How do we transform our
MLOps capability to be able to do this and do this at scale?”
It’s hardly surprising that ML is
so closely related to digital
transformation, because that’s
about the stage at which
organizations are competing.
Page 9 of 12
Often this shift is led by an individual in an organization, and momentum develops to
make these changes. But the changes can be really small at the start,
of course. You might start off with just a single ML problem related to digital transformation.
We acquired MapR some time ago, which is now our HPE Ezmeral Data Fabric. And it
underpins a lot of the work that we’re doing. And so, we will often start with the data, to
be honest with you, because a lot of the challenges in many of our organizations have to
do with the data. And as businesses become more real-time and want to connect more
closely to the edge, that's really where the strengths of the data fabric approach come in.
So another starting point might be the data. A new application at the edge, for example,
has new, very stringent requirements for data and so we start there with building these
data systems using our data fabric. And that leads to a requirement to do the analytics
and brings us obviously nicely to the HPE Ezmeral MLOps, the data science proposition
that we have.
Gardner: Doug, is the COVID-19 pandemic prompting people to bite the bullet and
operationalize data science because they need to be fleet and agile and to do things in
new ways that they couldn’t have anticipated?
Cackett: Yes, I’m sure it is. We know it’s happening; we’ve seen all the research.
McKinsey has pointed out that the pandemic has accelerated a digital transformation
journey. And inevitably that means more data science going forward because, as we
talked about already with that Forrester research, some 98 percent think that it’s about
competitive advantage. And it is, frankly.
The research goes back a long way to
people like Tom Davenport, of course, in his
famous Harvard Business Review article.
We know that customers who do more with
analytics, or better analytics, outperform
their peers on any measure. And ML is the
next incarnation of that journey.
Gardner: Do you have any use cases of organizations that have gone to the
industrialization approach to data science? What has it done for them?
Financial services firms reap benefits
Cackett: I’m afraid names are going to have to be left out. But a good example is in
financial services. They have a problem in the form of many regulatory requirements.
When HPE acquired BlueData it gained an underlying technology, which we’ve
transformed into our MLOps and container platform. BlueData had a long history of
containerizing very difficult, problematic workloads. In this case, this particular financial
services organization had a real challenge. They wanted to bring on new data scientists.
But the problem is, every time they wanted to bring a new data scientist on, they had to
go and acquire a bunch of new hardware, because their process required them to
replicate the data and completely isolate the new data scientist from the other ones. This
was their process. That’s what they had to do.
So as a result, it took them almost six months to do anything. And there’s no way that
was sustainable. It was a well-defined process, but it still involved a six-month wait.
So instead we containerized their Cloudera implementation and separated the compute
and storage as well. That means we could now create environments on the fly within
minutes, effectively. But it also means that we can take read-only snapshots of data.
A read-only snapshot is just a set of pointers, so it's instantaneous.
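Why a snapshot that is "just a set of pointers" is instantaneous can be shown in a few lines: the snapshot records references to existing data blocks rather than copying the data. This is a toy sketch of the concept only; the real data fabric does this at the filesystem layer:

```python
# Toy illustration: snapshotting captures pointers, not data, so its cost
# is independent of how many terabytes the blocks hold.
class Snapshot:
    def __init__(self, block_refs: dict):
        self.block_refs = block_refs  # shared references, no data copied

    def read(self, block_id):
        return self.block_refs[block_id]

class Volume:
    def __init__(self, blocks: dict):
        self.blocks = blocks  # block_id -> data

    def snapshot(self) -> Snapshot:
        # Copies only the table of pointers, never the data itself.
        return Snapshot(dict(self.blocks))

vol = Volume({"b1": "terabytes of customer data", "b2": "more data"})
snap = vol.snapshot()               # instantaneous: pointers only
vol.blocks["b1"] = "overwritten!"   # a "destructive" data scientist at work
print(snap.read("b1"))              # the snapshot still sees the original
```

Each data scientist can therefore get an isolated, read-only view of shared data in seconds instead of waiting for a multi-terabyte copy.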
They were able to scale-out their data science
without scaling up their costs or the number of
people required. Interestingly, recently, they’ve
moved that on further as well. Now doing all of
that in a hybrid cloud environment. And they only
have to change two lines of code to allow them
to push workloads into AWS, for example, which is pretty magical, right? And that’s
where they’re doing the data science.
Another good example that I can name is GM Finance, a fantastic example of how
having started in one area of the business -- all about risk and compliance -- they've been
able to extend the value to things like credit risk.
But doing credit risk and risk in terms of insurance also means that they can look at
policy pricing based on dynamic risk. For example, for auto insurance based on the way
you’re driving. How about you, Dana? I drive like a complete idiot. So I couldn’t possibly
afford that, right? But you, I’m sure you drive very safely.
But in this use-case, because they have the data science in place it means they can
know how a car is being driven. They are able to look at the value of the car at the end of
the lease period and create more value from it.
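A dynamic, driving-based policy price of the kind described could, in its simplest form, scale a base premium by behavioral risk factors derived from telematics. This is a made-up illustration, not the company's actual model; every factor and coefficient here is an assumption:

```python
def dynamic_premium(base_rate: float, harsh_brakes_per_100km: float,
                    night_driving_share: float) -> float:
    """Scale a base premium by simple driving-behavior risk factors.
    Coefficients are illustrative assumptions, not calibrated values."""
    risk_multiplier = (1.0
                       + 0.05 * harsh_brakes_per_100km   # penalize harsh braking
                       + 0.30 * night_driving_share)     # penalize night driving
    return round(base_rate * risk_multiplier, 2)

safe_driver = dynamic_premium(500.0, harsh_brakes_per_100km=0.5,
                              night_driving_share=0.05)
risky_driver = dynamic_premium(500.0, harsh_brakes_per_100km=4.0,
                               night_driving_share=0.60)
print(safe_driver, risky_driver)  # the riskier profile pays more
```

In practice the multiplier would come from an operationalized ML model scoring the telematics stream, but the shape of the decision -- behavior in, price out -- is the same.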
These are types of detailed business outcomes we’re talking about. This is about giving
our customers the means to do more data science. And because the data science
becomes better, you’re able to do even more data science and create momentum in the
organization, which means you can do increasingly more data science. It's really a virtuous circle.
Gardner: Doug, if I were to come to you in three years and ask similarly, “Give me the
example of a company that has done this right and has really reshaped itself.” Describe
what you think a correctly analytically driven company will be able to do. What is the end state?
The future is data-science driven
Cackett: I can answer that in two ways. One relates to talking to an ex-colleague who
worked at Facebook. And I’m so taken with what they were doing there. Basically, he
said, what originally happened at Facebook, in his very words, is that to create a new
product in Facebook they had an engineer and a product owner. They sat together and
they created a new product.
Sometime later, they would ask a data scientist to get involved, too. That person would
look at the data and tell them the results.
Then they completely changed that around. What they now do is first find the data
scientist and bring him or her on board as they’re creating a product. So they’re
instrumenting up what they're doing in a way that best serves the data scientist, which is exactly the right way around.
The data science is built-in from the start. If you ask
me what’s going to happen in three years’ time, as we
move to this democratization of ML, that’s exactly
what’s going to happen. I think we’ll end up genuinely
being information-driven as an organization.
That will build the data science into the products and the applications from the start, not
tack them on to the end.
Gardner: And when you do that, it seems to me the payoffs are expansive -- and ongoing.
Cackett: Yes. That’s the competitive advantage and differentiation we started off talking
about. But the technology has to underpin that. You can’t deliver the ML without the
technology; you won’t get the competitive advantage in your business, and so your
digital transformation will also fail.
This is about getting the right technology with the right people in place to deliver these
kinds of results.
Gardner: I’m afraid we’ll have to leave it there. You’ve been with us as we explored how
businesses can make data science more of a repeatable assembly line – an
industrialization, if you will -- of end-to-end data exploitation. And we’ve learned how
HPE is ushering in the latest methods, tools, and thinking around making data science
an integral core function that both responds to business needs and scales to improve
nearly every aspect of productivity.
So please join me in thanking our guest, Doug Cackett, EMEA Field Chief Technology
Officer at HPE. Thank you so much, Doug. It was a great conversation.
Cackett: Yes, thanks everyone. Thanks, Dana.
Gardner: And a big thank you as well to our audience for joining this sponsored
BriefingsDirect Voice of Analytics Innovation discussion. I’m Dana Gardner, Principal
Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard
Enterprise-sponsored discussions.
Thanks again for listening. Please pass this along to your IT community, and do come
back next time.
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.
Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.