Discussion about this post

Roman Leventov

Don't treat my comments and questions below as attacks--I actually like the idea a lot and would love it to work out. I want to understand it better, though.

After the release of ChatGPT, I remember a wave of buzz in the crypto community about equipping smart contracts with LLMs. Did you look into whether that went anywhere? What are the main differences between those schemes and the market intermediaries you propose?

I want to better understand how the mechanics of this would work, taking AI assistants as an example.

Does the market intermediary act as a proxy for both access and payments between the client (e.g., the user's PC or browser) and the AI service provider, the way OpenRouter acts as a proxy to different LLM providers?
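
To make my mental model concrete, here is roughly what I am picturing; every name below is my own invention for illustration, not something from the post:

```python
from dataclasses import dataclass, field


@dataclass
class Provider:
    """An AI service provider (e.g., an AI assistant) listed with the intermediary."""
    provider_id: str
    price_per_call: float

    def handle(self, request: str) -> str:
        # Stand-in for the real model call.
        return f"{self.provider_id} answered: {request!r}"


@dataclass
class Intermediary:
    """Proxies both access and payments between the client and providers."""
    providers: dict[str, Provider]
    escrow: list[tuple[str, str, float]] = field(default_factory=list)

    def call(self, user_id: str, provider_id: str, request: str) -> str:
        provider = self.providers[provider_id]
        # Payment is collected from the user and held by the intermediary,
        # not passed straight through to the provider.
        self.escrow.append((user_id, provider_id, provider.price_per_call))
        # Access is proxied too: the user never talks to the provider directly.
        return provider.handle(request)


assistant = Provider("assistant-a", price_per_call=0.02)
market = Intermediary(providers={"assistant-a": assistant})
print(market.call("user-1", "assistant-a", "summarize my inbox"))
```

Is that roughly the picture, or is the intermediary further away from the request path?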

Is it the task of the market intermediary to source and select providers and then offer users its "opinionated" choice, or does it look more like à la carte selection by the user from providers who have agreed (contracted) with the intermediary to distribute through its "platform" on specific financial terms that tie payments to specific outcomes?
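
For the second reading, I imagine the contract between a provider and the intermediary looking something like this (again, my invented names; the outcome metric and payout split are just placeholders):

```python
from dataclasses import dataclass


@dataclass
class DistributionTerms:
    """My guess at the financial terms a provider agrees to with the intermediary."""
    provider_id: str
    price_per_call: float
    outcome_metric: str             # e.g., "task_completed", per a user interview
    payout_share_on_success: float  # fraction of the price the provider receives
    payout_share_on_failure: float  # fraction it still receives on a bad outcome


def settle(terms: DistributionTerms, outcome_ok: bool) -> float:
    """Amount released to the provider for one call under these terms."""
    share = terms.payout_share_on_success if outcome_ok else terms.payout_share_on_failure
    return terms.price_per_call * share


terms = DistributionTerms("assistant-a", 0.02, "task_completed", 1.0, 0.25)
print(settle(terms, outcome_ok=True))   # 0.02: full price released
print(settle(terms, outcome_ok=False))  # 0.005: payment tied to the outcome
```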

How should change management work? Suppose the intermediary operator decides that the currently used metrics are ineffective, but many providers are already plugged in; there would be strong pressure from all sides to keep the status quo. If a provider threatens to leave the platform because the new metrics the operator wants to implement are too costly or too sensitive for it, users already accustomed to that provider (AI assistant) would probably stay with it and subscribe directly rather than via the market intermediary. The provider may feel it has sufficient leverage because UX switch-over costs for users are high, and the operator would prefer to keep the provider on the current terms rather than watch users re-contract with it directly, without any "intermediary protection" at all (even though the operator considers the current protection suboptimal).

You mention "user interviews" several times in the post. Is the protection against gaming by users (intentionally giving bad feedback to get their money back) that users part with their money anyway, even if they give bad feedback, so it's only the provider who may suffer (not receive the money) when the feedback is bad? I'd say user feedback is too susceptible to bias anyway, including confirmation, self-serving, sunk-cost, optimism, and novelty biases, as this much-discussed paper suggests (developers thought AI increased their productivity while it in fact decreased it, despite having zero incentive to give the tool good feedback): https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/.
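
In code, the anti-gaming property I am asking about would look like this (my formulation, not yours):

```python
def settle_with_feedback(price: float, feedback_ok: bool) -> tuple[float, float]:
    """Returns (refund to the user, payout to the provider) for one paid call."""
    user_refund = 0.0  # the user parts with the money regardless of the feedback,
                       # so giving bad feedback on purpose gains them nothing
    provider_payout = price if feedback_ok else 0.0  # only the provider bears the risk
    return user_refund, provider_payout


print(settle_with_feedback(0.02, feedback_ok=True))   # (0.0, 0.02)
print(settle_with_feedback(0.02, feedback_ok=False))  # (0.0, 0.0)
# On bad feedback, does the intermediary keep the withheld money or redistribute it?
```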

> Aggregating many customers lets intermediaries tune that risk, striking a transparent balance rather than pushing it wholesale onto either side.

I didn't understand this sentence. What does "risk tuning" mean here, for example in the case of an AI assistant?
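
My best guess, which I'd like you to confirm or correct: with many pooled customers, the intermediary can price the bad-outcome risk statistically and choose how to split it, instead of one user or one provider absorbing it whole. All numbers below are made up:

```python
n_calls = 10_000        # calls aggregated across many users
price = 0.02            # price per call
p_bad_outcome = 0.05    # observed rate of bad outcomes across the pool

# Expected money at stake from bad outcomes across the whole pool.
at_risk = n_calls * price * p_bad_outcome  # $10.00

# "Tuning" the risk would then mean choosing a transparent split, e.g.:
provider_share = 0.7    # providers absorb 70% via withheld payouts
user_share = 0.3        # users absorb 30% via a small per-call premium

premium_per_call = at_risk * user_share / n_calls
print(f"at risk: ${at_risk:.2f}, per-call user premium: ${premium_per_call:.5f}")
```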
