If AI is trained on human data, and people are relying on AI for everything, doesn't that mean AI will eventually hit a point where it just spits out random bullshit and fails altogether? If AI doesn't actually think for itself, then a competency crisis would affect it too, no matter how far downstream it sits from collectively human-generated information.
What exists now isn't truly AI but "machine learning": pattern-recognition software whose roots go back to the 1970s. So far, when ML is trained on human data, the degradation is present but low, though it needs constant feedback to stay stable. When ML is trained on the output of other ML, it's cheap but degrades catastrophically. By comparison, it's unsustainably expensive to keep training it on humans.
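The ML-trained-on-ML degradation can be shown with a toy simulation (my own illustration, not anything specific from the thread): fit a Gaussian to some data, sample synthetic data from the fit, fit again on only the synthetic data, and repeat. Each generation's finite-sample estimate loses a little spread, and the losses compound until the distribution collapses.

```python
import random
import statistics

def fit_and_sample(data, n):
    """Fit a Gaussian to the data, then emit n synthetic samples from the fit."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)  # MLE estimate, biased slightly low
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
n = 50
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: "human" data
initial_std = statistics.pstdev(data)

# Each later generation trains only on the previous generation's output.
for generation in range(1000):
    data = fit_and_sample(data, n)

final_std = statistics.pstdev(data)
print(f"std: {initial_std:.3f} -> {final_std:.6f}")  # spread collapses toward 0
```

Real models are far more complex than a single Gaussian, but the mechanism is the same: every generation can only reproduce what the previous one emitted, plus its own estimation error, and the tails of the distribution are the first thing to go.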
The best end product we can hope for is small local machines that serve as advanced search engines or an OS pilot of sorts. A good timeline would look like the personal-computer era, with PCs giving way to home hubs. The resource demand of conventional data centers is far lower than what it takes to maintain massive processing facilities for "AI." Local machines doing all the legwork and only connecting to some cloud for new information would be far cheaper and more efficient.
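That local-first idea is essentially a cache with a cloud fallback. A minimal sketch (all names here, like `HybridAssistant` and `cloud_fetch`, are made up for illustration):

```python
class HybridAssistant:
    """Toy sketch of a local-first machine: answer on-device when possible,
    and reach out to the cloud only for information it doesn't have yet."""

    def __init__(self, cloud_fetch):
        self.local_index = {}           # on-device knowledge base
        self.cloud_fetch = cloud_fetch  # expensive remote call

    def query(self, question):
        if question in self.local_index:
            return self.local_index[question], "local"
        answer = self.cloud_fetch(question)  # only for new information
        self.local_index[question] = answer  # cache locally for next time
        return answer, "cloud"

# Usage: a fake cloud that counts how often it is contacted.
calls = []
def fake_cloud(q):
    calls.append(q)
    return f"answer to {q!r}"

hub = HybridAssistant(fake_cloud)
first = hub.query("what is model collapse?")
second = hub.query("what is model collapse?")
print(first[1], second[1], len(calls))  # cloud local 1
```

The design point is just that repeated questions never leave the device, so the expensive centralized infrastructure only gets touched for genuinely new information.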
However, this is all still up in the air. Logic is nothing in the face of short-sighted greed, unnatural market bubbles, force majeure, etc.