nis0s 10 hours ago

> But AI does not behave like software. Its economics resemble the economics of infrastructure. Valuations may appear disconnected from productivity. Capital may look like it is circulating in a self-reinforcing pattern. Spending may appear excessive. Yet these dynamics appear irrational only through the lens of consumer technology.

But the problem is that, in the end, it is consumer technology, because money is only made when someone buys whatever it is you're selling. The problem that gets neglected is that LLMs are not AI, and LLM tools are not capable enough by themselves to conduct people's affairs. So what is all that spending for? Something that will bleed customers when your quality inevitably goes down?

Leadership that wants to use LLM tools without quality assurance is assuming revenue will stay the same, but it won't once quality drops and customers leave for a better product or service.

The other issue: let's imagine we've actually created AI. Why shouldn't it replace CEOs with multi-million or multi-billion dollar salaries? That would be more efficient, and more in line with fiduciary duty. If AI can finally think for itself, then we replace workers and work itself, and everyone becomes a speculative trader for a living. How do we ensure that system is sustainable when it's easier for automated systems to coordinate at scale, creating security risks and other serious problems?

Humans will always be needed as supervisory components in automated decision systems. Otherwise, the people running them are just playing with toys beyond their comprehension or control, and should themselves be replaced by someone more knowledgeable and responsible.