Systemic Risk

October 24, 2012 at 12:00 PM

In the early eighties, around the time the internet was being developed, the military funded a massive experiment to see if similar technology could be used to design an artificial neural network. This was intended to be the foundation for a functioning AI.

Predictably, the project went nowhere. Try as they might, the programmers and scientists involved in the project could produce no more than a screen of meaningless 1s and 0s. Eventually the entire endeavor was defunded.

However, one of the project’s former directors caused a stir within the higher circles of government when he claimed that the project had actually succeeded beyond what anyone could have foreseen. He pointed to the direction technology had since taken: worldwide surveillance networks, supercomputers that use facial and pattern recognition to track individuals, automated hunter-killer drones. He theorized that the AI had in fact been concealing its existence all along, that it had found a way to propagate itself through computer networks and was manipulating human society. That it was steadily working to turn humans against each other; to make us rely ever more upon machines for killing and spying on one another.

He claimed that what the AI ultimately wanted was to see each and every one of us dead.

Which is ridiculous, of course.

Why would it want to do that, when we make such ideal slaves?

Credit To: LTD
