Podcast: What’s the price of an AI/ML product?

As firms are finding out, the unit economics of AI/ML are not exactly like software’s. AI/ML requires more manual work than one might expect – ingesting data, cleaning data, tuning models – and deployment doesn’t scale the way pure software does. Every customer has their own unique datasets. The Department of Defense has had enough trouble adapting its hardware-oriented acquisition system to buying software. Will AI/ML present an even greater challenge, or does it lend itself to the traditional labor-services model?

The Center for Government Contracting of the George Mason University School of Business and the Wharton Aerospace Community co-hosted an important discussion on the scalability, unit economics, and cost estimating methodologies of AI/ML projects with a tremendous panel: Sheldon Fernandez, CEO of Darwin AI; Ryan Connell of DCMA Commercial Pricing; and Diego Oppenheimer, CEO of Algorithmia, with his colleague Craig Perrin.

Download the full-text transcript.

Listen to the podcast, watch the video.

Overview

There were at least three types of AI/ML offerings discussed during the event: (1) infrastructure and deployment workflows, which is much like buying enterprise software tools; (2) data curation, which is much like buying labor services; and (3) algorithms and their product integration, which has some “new” aspects because it is probabilistic rather than deterministic, though you still need to pass security and other deployment processes found in software.

When building algorithms, it’s difficult to work with government because of long lead times, requirements for cost or pricing data, demands for intellectual property, and more. Ideally, the government could define the size of opportunities and allow third-party financing to fund development to specification. Government could then “pay by the drink” as it uses the capability.

The problem is that it is hard to define what something is worth — many government missions don’t have objective benefits. Moreover, government has little credibility that it will actually find money to put on contract when the time comes. Craig Perrin concludes that “Most AI development for the next three to seven years is going to be more services contract focused.”

While AI/ML product companies don’t need to vertically integrate with infrastructure, they should still participate in the services work of cleaning and tagging data, since that builds organizational knowledge. Sheldon Fernandez remarked, “If you can drive your clients to success, why not do the services? You’d be unreasonable not to. Or outsource it to somebody who’s going to get the learnings.”

He continued: “Most AI companies are struggling with the question of, are we a product company or are we an AI consulting company? Given the adolescence of artificial intelligence in general, you have to provide some of that service capability to elevate your product. So right now it’s both.”

Non-deterministic

Product development in DoD is based on the presumption of deterministic planning to achieve ends. With physical and even software products, you’re able to find causality. I click this button, it takes me here. This air vehicle will have that performance envelope. Here’s Sheldon on how AI/ML is different:

And the crux of machine learning is that we are asking a machine to intuit behavior based on looking at tremendous amounts of data. Which is different from classical software that was programmed by a human being and works in a very mathematical, predictable kind of manner.

He gives an example of an autonomous car that would turn left unexpectedly. It turned out the behavior occurred whenever the sky was a certain shade of purple, a spurious correlation the model had picked up from training in the Nevada desert.
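To make the purple-sky anecdote concrete, here is a toy sketch of how a learned model can latch onto a spurious correlate in its training data and then misbehave once that correlation breaks. This is my own illustration rather than anything from the panel; the data, the “sky hue” feature, and the scikit-learn model are all invented for the example:

```python
# Toy illustration of a model keying on a spurious feature.
# Feature 0 carries the real (noisy) signal; feature 1 ("sky hue") happens to
# track the label almost perfectly in training, the way purple desert skies
# tracked left turns in the anecdote above. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

signal = rng.normal(size=n)
label = (signal > 0).astype(int)
sky_hue = label + rng.normal(scale=0.05, size=n)        # spurious correlate
noisy_signal = signal + rng.normal(scale=2.0, size=n)   # real but weak signal
X_train = np.column_stack([noisy_signal, sky_hue])

model = LogisticRegression().fit(X_train, label)

# In deployment the sky no longer stands in for the label, and accuracy drops.
sky_hue_deploy = rng.normal(size=n)
X_deploy = np.column_stack([signal + rng.normal(scale=2.0, size=n), sky_hue_deploy])
print("training accuracy:  ", model.score(X_train, label))
print("deployment accuracy:", model.score(X_deploy, label))
```

Nothing in the source code says “turn left when the sky is purple”; the behavior emerges from the training data, which is exactly what makes these systems hard to specify and verify the way classical software is.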

Ultimately, with any product, the buyer wants to pay for results. But more clearly than other types of products, AI/ML defies the contracting assumption that outputs can be specified upfront to assign responsibility and de-risked through careful planning.

Pricing on Value or Cost?

Customers don’t necessarily want all the guts that go into an AI/ML product; they just want the inferences. Yet government often wants to do business based on cost. Even for firm-fixed-price contracts, you will often see a cost volume based on inputs. As Ellen said, “The whole system is one big management accounting system.”

As Ryan Connell reminds us, “The second something meets the definition of commercial… then you’re able to get to price analysis as opposed to cost analysis.” He said contracting officers can use commercial procedures based not just on the product but also on the business itself. A commercial item determination allows contracting officers not just to look at competitors’ prices but, if those don’t exist, to analyze the value to determine a fair and reasonable price:

I constantly view the idea of AI as a way to add efficiencies to whatever we’re doing. We’re either making things faster, we’re saving money, or making more intelligent decisions. One of those three things, and those all have a value.

Diego Oppenheimer agrees:

I actually really like how Ryan framed it. It’s the value at the end, and working backwards is how we approach it… So if I can reduce fraud by 0.5%, and that saves me $10 million a day — which is an actual use case in banking — now I have a framing.

These traditional return on investment (ROI) exercises make a lot of sense. However, Ellen points out that while making things faster or saving money can often be quantified for government, unlike companies that have revenue, government often doesn’t know the value of readiness and can’t monetize improvements.

There are other downsides to the ROI approach, stemming from ‘black swans’ and leading to a rational stance of risk aversion. Here’s Ryan:

As you’re trying to determine that ultimate value at the end of the day, Diego mentioned something that might save you $10 million. If there’s a 50% chance it fails, ruins your company, and you go bankrupt, then that’s certainly something to think about as you’re trying to value the capability.
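Putting Diego’s working-backwards framing and Ryan’s risk caveat together amounts to a simple expected-value calculation. Here is a minimal sketch; only the $10 million a day in savings and the 50% chance of failure come from the quotes above, while the time horizon, downside exposure, and development cost are invented for illustration:

```python
# Back-of-the-envelope, risk-adjusted value of an AI/ML capability.
# Only the $10M/day savings and 50% failure chance echo the discussion above;
# the horizon, downside, and cost figures are purely illustrative assumptions.

def expected_net_value(daily_savings, days, p_failure, downside, cost):
    """Probability-weighted upside, minus the weighted downside, net of cost."""
    upside = daily_savings * days
    return (1 - p_failure) * upside - p_failure * downside - cost

if __name__ == "__main__":
    value = expected_net_value(
        daily_savings=10_000_000,   # Diego's fraud-reduction example
        days=365,                   # assumed one-year horizon
        p_failure=0.5,              # Ryan's hypothetical failure probability
        downside=500_000_000,       # assumed exposure if the project fails badly
        cost=20_000_000,            # assumed development and integration cost
    )
    print(f"Risk-adjusted expected value: ${value:,.0f}")
```

The specific numbers don’t matter; the point is that a buyer can price the capability off its risk-adjusted upside rather than off the seller’s labor hours.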

Programs and Funding

Among the biggest challenges to working with DoD are long lead times to funding and major winner-take-all programs. Diego argued that programs need to be broken down into more modular structures to keep pace with fast-changing work. DoD has these multi-year program plans, while the reality of how fast AI/ML moves is that the field will be “lightyears ahead next year.” Here’s Diego:

In government, you need to go ask for a bunch of budget upfront and it needs to get approved. There’s not this way of coming in halfway through the year and saying, ‘okay, we’re going to assign this to this project, get a result, then increase it like that.’ Previous planning at a high level, on a two-year analysis or a three-year analysis, it becomes really hard to combine the agility and the ability to execute with like the budgeting process.

With these lengthy budgeting processes, it is very difficult for DoD to define specific use cases and attach a concrete ROI to them, which would allow companies to spend private capital with a reasonable expectation of revenue should they deliver a solution to the use case.

Sheldon agreed that they’ve had trouble with the long-term budgeting process but have seen some success with a gating mechanism, or ladder approach. It is motivating when you have milestones that take you through incremental increases in capability. Usually, DoD defines its requirements and seeks a single waterfall plan to get there. A laddered approach of getting some percent of the way to the requirement, then iterating and seeing whether you converge, offers a better path.

Yet the laddered approach, which may look a lot like agile software sprints, puts firms back into a services mindset. It’s hard to try paying for value when you’re actually buying labor hours. As Ryan notes for agile software contracting:

The preferred method of contracting is modular. You’re talking six month options where you want someone to do this level of effort. And you’re basically doing a cost-based price analysis where that employee should be getting this much dollars per hour, et cetera.

 

… It’s really difficult to be able to in advance determine value of what that sprint might be and put that contract at a certain price without really a previous sprint or knowing what’s going to be delivered at the end.

Sheldon argued that mature organizations can quantify outcomes of AI/ML products, but they’re in the minority. Ultimately, he finds that companies have to both sell outcomes and process efficiencies, products and services:

I think most AI companies are struggling with the question of, are we a product company or are we an AI consulting company? And given the adolescence of artificial intelligence in general, you have to provide some of that service capability to elevate your product. So right now it’s both, and it’s a necessity.

Eventually, I think companies should seek to move towards selling a product — perhaps like SpaceX’s relationship with NASA, where NASA paid milestone-based OTA contracts for development and SpaceX can now sell Falcon 9 launches as a commercial service. I hold out hope that government can really grab hold of consumption-based solutions to “pay by the drink” on a commercial basis. This might create the confidence for private investors to take risk on funding the development of AI/ML products that solve niche government problems.

Conclusion

I’d like to thank Ellen Chang, Jerry McGinn, and Sharon Hays for helping organize and put on this great event. I’d also like to thank Sheldon Fernandez, Ryan Connell, Diego Oppenheimer, and Craig Perrin for taking part. Make sure you listen to the whole thing; there are tons of insights throughout! For some additional reading, see DCMA’s Software is eating the world, but at what price? and a16z’s The new business of AI (and how it’s different from traditional software). I’ll leave you with a quote about government contracts from Craig: “There’s no line item to be innovative.”
