18 Comments

Great analysis. It's much more ASML than it is MS-DOS.

Still awaiting your take on Mosaic.

I miss that little dinosaur.

Different Mosaic

whoops

> This isn’t Linux where a single developer built the product as a gift to humanity

Linux's success as a **product** was largely thanks to Red Hat and others, who kept key maintainers on payroll for decades.

So for the most part Linux falls in line with the message of the article.

Interesting, but clearly biased. The author didn't even try to hide their antagonism. Of course, working at a fund would not particularly endear you to a product you can't sell.

I don't have a horse in any of these races. What would your argument be against the diminishing returns and higher costs point? From the outside looking in, that seems salient.

One thing you completely miss is the very likely prospect (considering the rate of development in AI over the past 15 years) of radical new developments in language models. Many people (including Sam Altman) believe the next breakthrough in these models will NOT be compute-intensive.

Interesting that he calls Zack a “savvy capitalist”, not a savvy businessman.

I’m very concerned that centralization and closed-sourcing of this stuff lead to some messed-up dystopian situations, but maybe not? Could a decentralized approach (DePIN) be just as nasty? Is there truly no way to have both?

Enjoy hearing your perspective. Thanks!

Ha ha, the same arguments against open source that Microsoft used to fill their marketing ads with. Still wrong for the exact same reasons.

Solid read, thanks for breaking down your POV. Do you think countries will ever have their own sovereign AI models, or will that be outsourced to their corporate counterparts?

I agree with your analysis. Here are some points I had previously predicted:

Synthetic data is a developing research field without a dominant technology yet. The re-captioning used in models like DALL-E 3 and Sora is an example of synthetic data in practice, generating machine-made training data (a rough sketch follows at the end of this comment).

Investment in synthetic data research should be increased to build future competitiveness.

By next year, high-quality new data may be insufficient, and simply increasing data volume won’t meet model development needs. If synthetic data doesn’t see breakthroughs in the next two years, large model development may slow, raising questions about the AIGC development model.

There are two possible futures for synthetic data:

1. If it fails to become practical, model capabilities may plateau, and the gap between open-source and closed-source models may narrow, which would be disadvantageous for closed-source model companies.

2. If synthetic data or technology breakthroughs occur, data utilization efficiency will improve, enhancing model capabilities. However, this will require significant resource investment, potentially putting open-source models behind closed-source models.

In the AI field, there are two main development directions:

1. Increasing model parameters to enhance performance, as seen with GPT-4, which reportedly has trillions of parameters and exhibits excellent performance.

2. Focusing on the scale and quality of data, exemplified by the Llama 3 model, which is trained using an extremely large dataset.
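
On the re-captioning point above: here is a minimal sketch of what a re-captioning-style synthetic data pipeline can look like. The `generate_detailed_caption` function is a hypothetical stand-in for a real captioning model; nothing here reflects DALL-E 3's or Sora's actual implementation.

```python
# Sketch: re-caption weak alt-text into richer, machine-generated captions,
# and keep the (image, synthetic caption) pairs as training data.
from dataclasses import dataclass


@dataclass
class Example:
    image_path: str
    original_alt_text: str
    synthetic_caption: str


def generate_detailed_caption(image_path: str, alt_text: str) -> str:
    # Placeholder: a real pipeline would call an image-captioning model here.
    return f"A detailed description of {image_path} (original alt text: {alt_text!r})."


def build_synthetic_dataset(raw_pairs: list[tuple[str, str]]) -> list[Example]:
    """Turn each (image_path, alt_text) pair into a machine-captioned training example."""
    return [
        Example(path, alt, generate_detailed_caption(path, alt))
        for path, alt in raw_pairs
    ]


if __name__ == "__main__":
    raw = [("img_001.jpg", "a dog"), ("img_002.jpg", "city at night")]
    for example in build_synthetic_dataset(raw):
        print(example.synthetic_caption)
```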

Very biased. iOS is better than Android? Really.

Why not position Meta as the next Red Hat by expanding its services to support users of LLaMA? It’s a win-win-win scenario: Meta would generate revenue through service offerings, endear itself to the open-source community, attract top-tier talent, and rebrand itself as a champion of humanity.

Certainly an interesting read, provoking many thoughts. It’s enough to forgive the clickbait title.

Solid analysis. Do you remember our bet?
