If I think of myself as an engineer, some of my time is debugging and thinking about what to write. But [the 10x engineer] would spend about 30 minutes in the morning thinking about what to do, and then the rest was just output. I think the odds that a piece of code you write is going to work on the first run are very predictive of intelligence. There are a lot of people who are like, that didn’t work, well, let me use brute force.
I’ve often thought that we need different names, because the person who is doing that in their head (they write it, it runs) is practicing a different trade than the person who can’t figure out how to get the CSS to work and has to change every single parameter, based on the googling they do on Stack Overflow, until it works. These two people do not have the same profession.
… Here’s an interesting thing that might be true across the board: what they consider easy work is what a lot of other people consider hard work. Their baseline sense of what a task is differs greatly from other people’s.
That was Daniel Gross on the World of DaaS podcast. Earlier in the episode, he said that prior to product-market fit, it’s really hard to know if a product idea is right. The best strategy is to find a founder who is super energetic and will take a lot of shots on goal, because their first guess is almost certain to be wrong. This perhaps sounds like the “great man” theory, but Daniel argues that while some innate talent may be crucial for mastery of chess or mathematics, entrepreneurship is mostly ambition and energy, which are available to all of us.
Notice how different this reasoning is from the DoD’s. There, funding is decided based on projects, not people. People are treated as interchangeable, executing a baseline plan developed by a slew of siloed functional experts. But note the danger here: people have very different senses of which tasks should be done and how hard they are. There is no objectively correct answer about future developments, though DoD program analysis often presumes there is. Indeed, defense analyses likely lead to poor outcomes because analysis of historical data cannot anticipate what is new and valuable.
Perhaps the founder model has a place in defense. A professional may spend years working in a lab, a program office, or the field, integrating concepts and technologies in their mind. When they have something to propose, select them based on their energy and knowledge; the program will follow. Underfund them to start, and make them prove efficacy along the way with regular funding rounds. Filter for the very best people, then pile on responsibility.