This article was originally published at VentureBeat and has been reproduced with permission.
Dan Wright just became CEO of DataRobot, a company valued at more than $2.7 billion that is promising to automate the building, deployment, and management of AI models in a way that makes AI accessible to every organization.
Following the release of version 7.0 of the DataRobot platform, Wright told VentureBeat that the industry requires a new era of democratization of AI that eliminates dependencies on data science teams. He explained that manual machine learning operations (MLOps) processes are simply not able to keep pace with changing business conditions.
This interview has been edited for brevity and clarity.
VentureBeat: Now that you’re the CEO, what is the primary mission?
Dan Wright: What I’m trying to drive is the democratization of AI. In the past, AI has been something of a buzzword. It’s been mainly experimental. You had data scientists working on different data science projects, but a lot of the models they were working on never actually made it into production or added any value. What we’re doing now is allowing our platform to be used by people who are not data scientists, as well as data scientists, to create business insights and make better decisions on an ongoing basis. That kind of opportunity is limitless right now, so we’re really focused on doing that.
VentureBeat: DataRobot just released a version 7.0 update to the platform. What are the highlights?
Dan Wright: We have enhancements to every one of our products within the platform. We can monitor and manage all of your models, regardless of where they live. They can be completely outside of DataRobot, and we’ll still provide alerts if there’s any sort of accuracy issue or model drift. Another thing is anomaly detection. In the past, a model would get thrown off when there was some sort of anomalous piece of data. Now we’re actually able to tell you this is an anomaly and ask if it should be disregarded. That way you don’t throw off your models.
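To make the idea concrete, here is a minimal sketch of the kind of anomaly check Wright describes: flag an incoming value that sits far outside a model’s recent history so it can be reviewed rather than silently skew the model. This is illustrative Python, not DataRobot’s API; the function name and threshold are invented for the example.

```python
# Illustrative sketch only -- flag incoming records that look anomalous
# so they can be reviewed instead of silently throwing off a model.
from statistics import mean, stdev

def is_anomaly(value, history, z_threshold=3.0):
    """Flag `value` if it sits more than `z_threshold` standard
    deviations from the mean of recent history."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_threshold

daily_sales = [102, 98, 105, 97, 101, 99, 103]
print(is_anomaly(100, daily_sales))  # typical value -> False
print(is_anomaly(480, daily_sales))  # sudden spike  -> True
```

A production system would use more robust statistics than a z-score, but the principle is the same: surface the outlier and let a human decide whether to disregard it.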
The other thing that we’ve done is we’ve created what we call our app builder, which makes it much easier for us to build applications on top of the platform for different use cases. We’re going to create an ecosystem of these AI-powered applications. Then there were some additional features around bias and fairness detection. Our philosophy is that we need to alert you if there’s any sort of bias or fairness issues with respect to your model, and then allow you to configure the model as you deem fit based on your own ethics and your own values.
VentureBeat: Most AI models require a lot of manual effort to build and maintain. Are we on the cusp of moving beyond that? Are we looking at the industrialization of AI?
Wright: I think that’s spot on. We have seen a lot of what I refer to as experimental AI, where people are using disjointed point solutions and open source tools. It’s been a little bit of a black box. Those days are over. Now it’s about the industrialization of AI using an end-to-end system all the way from data prep to monitoring and managing all of your models in production. It’s decision intelligence around specific use cases. I think we’re really going to see AI take off and become real, even for people who may have failed in the past.
VentureBeat: How much data science expertise will ultimately be required? Do organizations need a data scientist?
Wright: The whole idea with DataRobot is to automate a lot of the things that data scientists had previously done manually. You don’t need to be a very highly skilled data scientist to create value with AI and drive insights. Business analysts, engineers, and executives can all get models into production and then monitor and manage all those models. It’s really important that you build data science best practices into the platform, and that everything is fully explainable with trust and governance. It’s democratizing AI, but with guardrails to make sure that people don’t get in trouble.
VentureBeat: What impact did the economic downturn brought on by the COVID-19 pandemic have on AI adoption?
Wright: I think it had an impact in a couple of ways. One is that because there’s been so much volatility, a human can’t take in all of this data when it’s changing that rapidly. You need AI to actually understand what’s going to happen in the future. If you’re a big retailer trying to determine how many jars of peanut butter are needed in a particular store, that’s incredibly complex when you layer in the pandemic and all of a sudden you have stores opening and then closing.
The other thing we really saw with the pandemic was that there were already AI models being used in production. People woke up and realized they had no idea what was going on with those models. They had no visibility into them. All they knew was that the models were very likely to be inaccurate because all the data had completely changed. We’ve seen really broad adoption of our machine learning operations (MLOps) product, which is the part of our platform that allows you to monitor and manage all of your different models, including models created manually with Python or any sort of open source tool. If there is any kind of drift, you can actually run challenger models in the background. It’s no longer acceptable to just say I’m going to get a model into production and come back in six months to see if it’s still accurate. You need to be managing it in real time and updating it as the data is changing.
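As a rough illustration of the drift monitoring Wright refers to, one widely used measure is the Population Stability Index (PSI), which compares a feature’s distribution at training time against live data; scores above roughly 0.25 are conventionally treated as major drift worth retraining on. The sketch below is a generic Python implementation under those assumptions, not DataRobot’s method.

```python
# Hedged sketch: Population Stability Index (PSI) between a feature's
# training-time sample and a live sample. Bin counts and the 0.25
# threshold are common conventions, not DataRobot specifics.
import bisect
import math

def psi(expected, actual, bins=4):
    """PSI between two numeric samples; higher means more drift."""
    lo, hi = min(expected), max(expected)
    # Interior cut points; values below/above fall into the end bins.
    cuts = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[bisect.bisect_right(cuts, x)] += 1
        # Floor each bin at half a count so empty bins don't divide by zero.
        return [max(c, 0.5) / len(sample) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train        = [10, 12, 11, 13, 12, 11, 10, 12]   # training-time feature values
live_stable  = [11, 12, 10, 13, 11, 12]           # similar distribution
live_shifted = [25, 27, 26, 28, 27, 26]           # pandemic-style shift

print(psi(train, live_stable) < 0.25)   # True: no alert
print(psi(train, live_shifted) > 0.25)  # True: major drift, promote a challenger
```

When the score crosses the threshold, a monitoring system would alert and evaluate challenger models trained on fresher data, which is the workflow Wright describes.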
VentureBeat: Will MLOps eventually just become an element of existing IT operations?
Wright: What we’re really starting to see is an end-to-end system. I don’t think it’s going to be so much about just MLOps in the future; I think it’s going to be about monitoring the entire lifecycle of a model and continually updating it as data is changing. What makes what we do really powerful is we don’t just have MLOps. We have MLOps for all of your models, but most importantly we combine that with automated machine learning. We’re constantly running challenger models in the background and updating the models as the data is changing to do continuous learning. That’s what you’re going to see in the future. It’s not going to be about working for six months to get a model into production.
VentureBeat: It seems like MLOps borrows concepts that were originally pioneered by DevOps practitioners. What’s going to be the relationship?
Wright: I think it’s similar but more powerful. The platform automates many of the things that were previously done manually.
VentureBeat: Most AI models are dependent on the quality of the data, and yet the quality of the data in the enterprise is often suspect. Is there some way to address that fundamental problem?
Wright: You need to be able to automate the process to tag and clean your data to apply machine learning in the first place. We acquired Paxata in December of 2019, which was a company focused on data preparation. We’ve now integrated that into our platform. The other thing that’s really important is being able to take the data in from wherever it resides. One thing that we’ve really focused on is being able to plug into any data source, whether it’s saved locally or in any cloud. We have a great partnership with Snowflake, which made its first strategic investment ever in DataRobot. That is a major pain point for a lot of companies. A lot of companies previously tried AI, but they never got past the step of data prep. We’re really solving that by automating a lot of the process related to data prep.
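For a sense of what automated data prep replaces, here is the kind of repetitive cleanup that otherwise stalls projects before modeling even starts: trimming whitespace, normalizing missing-value markers, and coercing numeric strings. The field names and markers below are invented for illustration; this is not Paxata’s or DataRobot’s code.

```python
# Illustrative only: the repetitive tag-and-clean work Wright says
# should be automated before any model training.
def clean_record(raw, missing_markers=("", "n/a", "null", "?")):
    """Normalize one raw record: strip whitespace, map missing-value
    markers to None, and coerce numeric-looking strings to numbers."""
    cleaned = {}
    for key, value in raw.items():
        if isinstance(value, str):
            value = value.strip()
            if value.lower() in missing_markers:
                value = None
            else:
                try:
                    value = float(value) if "." in value else int(value)
                except ValueError:
                    pass  # leave genuine text alone
        cleaned[key.strip().lower()] = value
    return cleaned

row = {" Store ": "Boston ", "units_sold": " 42", "margin": "N/A"}
print(clean_record(row))
# {'store': 'Boston', 'units_sold': 42, 'margin': None}
```

Multiply this by hundreds of columns and dozens of sources and it becomes clear why data prep is the step where, as Wright notes, many AI projects previously stalled.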
VentureBeat: Most AI training today occurs in the cloud. Will training of AI models soon be moving all the way out to edge computing platforms?
Wright: We’re already seeing that, and it’s opening up new possibilities. The other thing that we’re seeing is AI is being used now on different types of data sources that were never previously possible. We have the ability now to take not just text data, but also image data, geospatial data, and many other types of data. You can combine them all into one model and generate predictions and decision intelligence. Humans have all of these different senses. Now AI is going to have all of those different senses, and the edge is definitely a direction that this technology is moving.
VentureBeat: Will the algorithms ever get smart enough to tell us not just the answer to a question but also the right questions to ask?
Wright: How we look at it is you want the AI to get as smart as possible. That requires that you have as much data as possible and that you’re continually improving your algorithms. But it’s not going to be about just AI or machine intelligence. It’s this combination of human intelligence with machine intelligence. That’s what’s going to create amazing opportunities in every industry in the future. There’s always going to be a human in the loop. I don’t think AI can be too smart so long as you’ve got that human in the loop.
VentureBeat: Is it possible one day AI models created for conflicting purposes ultimately just nullify each other?
Wright: I’ll answer that question in a couple of ways. We are seeing kind of a rush to adopt this technology. Many people have referred to this as a fourth industrial revolution, but there’s always going to be a first-mover advantage. With AI, that is even greater because of the feedback loop you get with algorithms that are constantly getting better and better. The leaders when it comes to AI are going to be the big winners over the next decade, and the losers really may never catch up. There is a very large sense of urgency to adopt the technology. It’s unlikely that people will adopt it at exactly the same rate, but let’s just say for argument’s sake they do. You’ll end up getting a much more efficient market.
VentureBeat: What’s your best AI advice to organizations right now?
Wright: Too few companies are actually asking what should be an obvious question: What value is actually being delivered from my AI? A lot of people have big budgets and have been spending tens of millions of dollars for years with some of the legacy vendors in the space. They’re not getting any value, and they’re not even actually looking to see if they’re getting any value. That’s no longer acceptable. You need to know in real time what value you’re getting from all of the models in production, and where the opportunities are to drive more value. This is a race, and whoever is able to get value fastest is likely going to win in the market. The other thing that has flown a little under the radar is this idea of trust. It’s not enough to just use open source tools or a bunch of disjointed solutions to try to experiment with AI. You actually need a system that has trust built into the very foundation so it’s not a black box.