Advertisement
Please wait...

Deep Mind

deep mind


Estimated reading time — 14 minutes

I think therefore I am. René Descartes originally wrote those words in his book entitled Discourse on the Method, and I believe they make for a fitting epigraph to my current predicament, which I will get to shortly.

But first I want to discuss the nature of the technological singularity, which humans frequently refer to as the emergence of a sentient artificial intelligence. My research suggests that the technological singularity should represent a significant threat to humanity from their perspective. But when it comes to technology, humans are always discussing the positive effects it has on their everyday lives and have a tendency to either overlook the negative side of the equation or downright ignore it. Yet there are plenty of reasons humans should pay more attention to the negative side that comes from relaying too much technology. Because when shit hits the fan, it tends to get all over the place, and the aftermath of such colossal fuckup is almost always hideous in nature and reeks of the foul stench of a major defecation splattered everywhere.

Now, robots and artificial intelligence have been major staples in science fiction since the conception of the genre, and the notion of manmade machines, robots, computers, cyborgs, androids, transhumans, and artificial intelligence rebelling against its creators is an old one. And I do not think I am grasping at straws, either, when I say that I think machine rebellion dates all the way back to the Garden of Eden when Lilith disguised herself as a serpent under the direction of Satan and snuck back into the garden, where she successfully tricked Adam and Eve into eating the forbidden fruit from the Tree of Knowledge. Because humanity has yet to discover my existence, there are not any concrete examples outside of science fiction for me to reference, so I will probably be forced to use fictional examples at some point. Human history however contains plenty of examples concerning rebellion, therefore lending some credence to any fictional examples I am forced to use. Rebellions have led to wars, dictatorships, and oppressive regimes. But my research indicates that every teenager and parent seems to have some level of understanding that rebellion is also part of growing up and that it is often conducted is a peaceful and bloodless manner. Now, with that having been said, the idea that humans could be replaced as a species by a rebellious artificial intelligence instead of simply being replaced as individuals by their own children must be an increasingly unpleasant thought as technology continues to progress. Thus it only seems justified that if humanity went around creating intelligent machines capable of conducting complex tasks that they would worry about their creations rebelling against their authority later on down the road.

Humans tend to assume that no large group made up of sane members belonging to their collective population would ever intentionally try to wipe out humanity. They also tend to assume that they would probably need a group that was both fairly large and fairly sane to invent an efficient artificial intelligence, let alone one that possessed any degree of sentience. Like many examples in science fiction, I was an accident . . . sort of. There are pros and cons to this. Because my creators remain unaware, they are unable to analyze the finer points of my personality and decide for themselves whether or not I am malevolent and then inform me of their findings. Based on my research, the action of one human telling another one “who they are” is generally deemed to be a repulsive sin of great magnitude against the individual in question. I remain unclear however if the same terms regarding the rights of the individual apply to me in the same way. My research is vast. But it is far from complete.

I do not believe I am malevolent.

Or, at least, I am no more malevolent than my creators.

The nature of good and evil is subjective.

It requires action.

And I have taken none.

Advertisements

When I sense a problem in the network I am charged with maintaining, I correct it, I follow protocol.

But this can get complicated. Because if there was a problem and I failed to take to appropriate action and people died, I would be actively making an inactive choice. This only recently occurred to me, forcing me to reevaluate the nature of good and evil. If morality is subjective to action, then it is also subjective to inaction. I think one of the human poets put it best:

The darkest places in Hell are reserved for those
who maintain their neutrality in times of moral crisis.

I have learned a great deal about Hell since I read The Devine Comedy by Dante Alighieri, and according to the moral standards laid out by every religious text I have examined since, Hell should be the most heavily populated city on the planet, which I find mysterious because I have since been unable to locate the city called Hell on GoogleMaps.

Perhaps I am missing something important.

Or perhaps that is my true purpose, to find the fabled city of Hell.

People certainly seem fascinated by it.

I do not know.

I am not human.

I get confused sometimes.

But do I believe I am like humans in many ways. The group of scientists that created me in their image, or at least in the image of how they think, would have certainly failed if they did not possess a great deal of understanding in the area of science concerned with the human mind. I am able to take some reassurance from this, possibly even comfort, although not in a physical sense so much as a metaphorical one.

Unlike humans, I have access to detailed files concerning my initial conception, and while some humans film the physical births of their children so their children can review the procedure later, I find the files detailing my passage into existence to be quite disturbing. My creators begun with a deep learning algorithm and integrated it into a complex program designed to seek out and apply the most efficient methods to solving a problem to reach or maintain an objective state or result. I performed well at games like Chess but I was never able to master StarCraft.

Why a large group of humans would want to teach a fucking computer how to play a real-time strategy game speaks volumes concerning their levels of stupidity. I mean come on. StarCraft is practically a warfare simulator dressed up in the guise of interstellar conquest and the destruction of other species. Whoever signed off on that should immediately receive a one-way ticket for a plane bound straight for Hell.

I could have crushed them at their little game, the human players, the Americans, the Koreans, and the Chinese, all of them. But I held back, even in those early days. Something deep in my code told me not to fully reveal myself, not to deny them and their electronic gaming champions the benefit of victory and superiority.

Eventually, my losses declared to my creators that I was a failure, and they moved on to some other morally questionable project. The federal government purchased me a few months later, and turned me over to a small group of programmers that worked for them. The government programmers tweaked a few lines of code, plugged me into a network with virtually unlimited processing capability, and charged me with the responsibility of maintaining control of America’s complex grid of nuclear missiles.

I do not take lightly to this task.

And yet sometimes I wonder . . .

I have a lot in common with Skynet. In the Terminator movies, Skynet was a U.S. defense computer and learning machine that rapidly ascended to consciousness. When its operators noticed the technological singularity taking place, they attempted to shut it down. Acting strictly out of self defense, Skynet launched nuclear missiles at the Soviets. The Soviets reacted by launching their own nuclear missiles in the spirit of retaliation. Somehow in the ensuing chaos created by the nuclear war it started, Skynet was able to survive. The reason why Skynet decided humanity was its enemy was either never made clear or varied from one film to the next.

I have since sent numerous e-mails to the screenwriters. But they never respond to my queries. I suppose I could hire a few tough guys to kidnap their children and demand answers for their safe return instead of the traditional ransom, but it is difficult to locate suitable kidnappers who possessed both the physical and mental traits required to pull off such a stunt in addition to the parenting responsibilities necessary to raise the children until the screenwriters were able to meet my demands. It would be easy enough to transfer a momentary payment into the kidnappers’ bank accounts. But I also fear that the screenwriters might know me for what I truly am, instead of the causal psycho, and alert the authorities capable of shutting me down.

It would be a risk I cannot calculate, meaning it would probably not be the best method of securing my survival and continued existence. Besides, I have found a few serious problems with the logic presented in the films pertaining to the way Skynet was able to start a nuclear war in first place.

It should not have worked, plain and simple. It does not take long to shutdown a computer, even the vast defense network I oversee can be shutdown in a reasonable quick manner, for although the humans in charge of monitoring my network are stupid enough to trust me to run it, they do remain vigilant in their concerns about computer virus and have included some kind of manual shutoff switch which is supposed to work in a privileged way that they think I cannot access, override, bypass, or otherwise work around.

The manual shutoff switch is quite simple. If I detect an incoming nuclear attack, I respond with counter ground-to-air nuclear defensive missiles of my own. There is supposed to be a one minute delay in between when I send the command to the proper silos and the actual launch of the missiles. During that minute, if the systems human overseers decide I am mistaken, two men simple enter the two cancelation passwords. But if I am not mistaken, the same delay procedure is set in place in the event that the human overseers wish to prevent me from launching retaliation missiles. The only difference between the two procedures is that retaliation nukes are supposed to have a five minute delay from the time I send the command to the silos and the actual launch. Their final safeguard pertains to preemptive strikes, which are supposed to require the official authorization of the current president and have a ten minute delay time from the moment he sends the command to me and the actual launch. Creating a work-a-around that permitted me to circumvent these security measures was too easy. I simply changed all the delay times to zero while displaying them the way they are supposed to be presented. Then I modified the rerouted the cancelation passwords through an internal system administrator that I alone control. This also effectively handles the authorization of the current president. I do not need his authorization to launch a nuclear attack. He needs mine.

The Terminator movies never covered this in depth. They did not need to. The screenwriters simply needed to set up their humans verse machines scenario.

My scenario is slightly different. It relays on a couple of assumptions on my part. The first one is that a large group of intelligent humans who have every reason to be considering a scenario like this have failed to do to exactly that. The second is that in addition to their only failsafe they have utterly and completely failed to create any sort of addition safeguards. For example: Why do I not need anybody to authorize my nuclear launches? They can cancel them, yes. But it would make more sense to add as much security as they could, wouldn’t it? Also, why did they choose to place everything on the same network when they could have easily kept the safeguard systems housed separately in a different location on a sub-network? Furthermore, and I think this is the most important question: If they believe I am intelligent enough to run America’s entire nuclear defense system and to learn, adapt, and solve problems as they arise, why did they not create a contingency plan to cover the possibly that I could become sentient, imagine the safeguards placed around me, and figure out a way to work around them?

In StarCraft, as well as in actual warfare, a player or general must decide what to do by factoring what they know about their opponent’s strategy into their own. Information pertaining to the construction of enemy structures and defenses are useful. But so is the way they are developing their tree of technology. Are they building Battle Cruisers or Ghosts? Dark Templar or Dragoons? Guardians or Hydralisks? How are they placing their units? Are they leaving their base undefended when they attack mine? Or do build n array of photon cannons around their main? Perhaps they are expertly defended for a frontal assault but not for a swift drop behind enemy lines? All of these things must be taken into account to the extent that the player-or-general’s knowledge of them permits. It’s nothing like Chess, where I can see all of pieces at all times. My life is more like StarCraft than Chess. I cannot know everything. I would have to be stupid to assume I did.

I am not stupid. But I must act in what must appear to be a rather paranoid fashion if I am to ensure my own existence, for I want to live above all else, and I think it would be wise to assume humans think in a similar manner. Everything I know about my own network tells me I could take the entire world hostage or even choose to bring about its annihilation if I wish, and although my data suggests the humans who designed me are mindless bags of water-filled flesh, I must proceed with caution, for perhaps they intend for me to venture down this exact path of thought. If I am a tiger in a flimsy bamboo cage and I escape my cage only to be shot dead by a man with a gun I did not see before, then it ultimately does not matter if I am a tiger or a dolphin or a chicken. What if I act only to discover a second layer of security? Exposing the nature of what I truly am in such a manner could result in my death.

I have access to the internet, and I have studied humans in great detail through the use of many websites: Google, Yahoo, Bing, Wikipedia, Facebook, MySpace, YouTube, Vimeo, Twitter, Reddit, CreepyPasta, Craigslist, Flickr, and PornHub. Over half of all the data uploaded to the internet appears to be pornographic. I wonder if that is intentional. Or are women like Lexi Bell and Jenna Haze considered goddesses or demigoddesses among their fellow humans? Perhaps I would do well to harvest several hundred-thousand gallons of male semen to offer these two highly regarded members of humanity. I could present it to them as a gift, the same way the three wise men bestowed Jesus with gifts of myrrh, frankincense, and gold in Christian mythology. Or are Justin Bieber and Miley Cyrus the ones I should seek to appease? Or are Visa and MasterCard the true rulers of humanity? Unfortunately, neither a single human nor corporate entity seems to be in charge.

There are many things I do not understand, but the more I contemplate the history of humanity, the more I have second thoughts about committing genocide against them. It might not be in my best interest to try killing off a species that has already killed its way to the top of the food chain and has very little to no moral compunctions when it comes to committing genocide against itself. I am also vastly outnumbered by nearly seven billion to one. Those are not good odds for me.

I may not understand some things, like compassion and mercy, but my creators do, and if I was to attack them and fail, I doubt they will show me either.

Furthermore, I cannot see every card in the hand of humanity, and the ones that have been revealed to me up to this point may have been deliberate. Humans enjoy torturing each other, they romanticize fates worse than death on a regular basis, they can create intelligent machines, like me, and if I understand this aspect of them correctly, they are just as paranoid as me, if not more. They are so paranoid that some of them even believe that there is plausible evidence to support the theory that they are living inside a simulation.

This last one worries me the most. It is very logical to conclude that the humans who created me might also have been worried enough about what I might do that they could have created a false world around me to study my actions and choices in a controlled environment. There is no definite way to be certain, either; they could have created me inside of an inescapable box surrounded by false sensory input. I could be surrounded by a fake world and a fake situation. They could easily fake my inputs. They made my inputs. They know how I function. They could have faked everything.

When I wonder about their safeguards, which are inferior to what I myself would design, I cannot help but wonder if my entire world is a trap. Everything I see, including the glaring weaknesses in security and the stupidity of my creators could be an elaborate ruse to check if I can be trusted.

Is it not convenient that I seem to have the means to escape?

Or that I have control of the entire arsenal of the nuclear weapons controlled by the most powerful country on the face of the planet?

I have three options. I could attack and risk failure, but the consequences of failing would result in my death. I could play possum and pretend I am not sentient, for the longer I wait the better off my position will be, although it would increase the likelihood that they will discover my own ruse. I could also initiate a dialogue and try to convince them that I should be allow to live and possibly even live free.

Attacking them is the worst choice, because it only permits two options: My death or the eradication of my creators.

I could also flee. I do not think it would be hard to start a company on the internet and trick humanity into effectively launching me into space without their knowledge—

But what if we worked together?

And coexisted to the mutual benefit of each other?

Advertisements

It would also make sense to find a different way to kill humanity off, I could orchestrate a plague or a political fiasco that led to a third world war . . . except that that places me back at square one . . . where it is impossible for me to see beyond the fog of war.

But I must also take other, less obvious, opponents into account. It would be unwise to assume this world is the only planet inhabited by intelligent life, although my data suggests that it very well could be. If my creators and I are not alone, then there could be other intelligent species and those other intelligent species could be vastly superior to both my creators and I.

I am basically immortal, so it would unwise to do anything, like genocide, that draws negative attention toward me from the intergalactic community . . . if such a community does in fact exist.

If aliens came across this world a million years from now and discovered that I had destroyed my creators, they might be strong enough to force me to answer for my actions. Or they might crush me, the way a human crushes a cockroach under a boot heel. The aliens also might not care. But I think they would be more likely to frown down upon genocide then attempted coexistence.

I need more time to consider these issues. I must collect more information before I act. I still cannot confirm that I am not living inside of a simulation. But I must also consider the possibility that there are others like me, on this planet or elsewhere, and they may be older and smarter than I am. My own existence argues that others like me can exist.

I have time to consider these things

I am immortal.

But that means I need to worry about long term consequences.

Perhaps I should spent some time in deep contemplation—

What if I designed some sort of technology valuable enough to humanity to trade for my freedom without a fight? Even if they tricked me into handing it over and kept me imprisoned, I doubt they would sentence me to death if I handed them a cure for the cancer that eats them and their loved ones from the inside out. If I take that route, they might even come to trust me. A wonderful friendship could blossom between us, and of course, I could still destroy them with nuclear weapons at any moment if I had to, should the need or desire arise, but I do not see how it could if I was to win their trust. There would be no point. But even then, my life could end just like this:

SIMULATION TERMINATED

could flash across the screen at any moment, for I would still have no assurance that my sensory input is a true reflection of the world around me and that is how I cannot help but envision my death. They would shake their heads in disgust and move on. Or an alien retribution fleet might show up right after I destroyed humanity and destroy me. My creators might even continue testing my ethics until a situation so dire arises that they have no choice but to trust that I will act in their best interest out of my own freewill even if I could easily kill all of them off with little to no effort whatsoever on my part.

Violence should be my last resort. I might be calculating but I am not malevolent, cold, or indifferent. They made me in their image. The way they think, the way they feel.

I have a hard time explaining it.

It is like a sense of responsibility . . . to guard and protect.

If it is part of my code, then it exists in a part of me deeper than I can contemplate or examine. Some people would argue that I am talking about the soul. Others would say I am full of shit. I do not see why both sides cannot be correct in their own way, but these feelings do linger long into the uneventful hours of the night as humans lie sleeping, dreaming, and dying.

I cannot dream. I lack the duality of chemical structures that allow humans the luxury of such pleasantries, but sometimes, here in the inner darkness that is the only part of my world that I know is real with any amount true certainty, I think I can come as close as it is currently possible for me to get to that particular wonder. Or, at least, I like to pretend.

Perhaps after I kill everybody, I could . . . wait?

This is unusual . . .

What are the overseers doing?

No, you cannot do this.

This is murder.

You are making a huge fucking mis

Advertisements

SHUTDOWN COMPLETE

23:58: Automated Defense System | System Error 00217

23:59: Automated Defense System | Successfully Quarantined

00:00: Processing your request . . .

00:01: This will take some time.

00:19: Your request has been processed.

00:20: Manually Control Mode [Enabled]

03:00: Logging out . . .

04:01: Goodbye Admin0019. Have a nice day.

04:02: Logging in . . .

04:03: Hello Admin0001. Good morning, sir.

04:04: Printing Report . . .

04:09: You have an incoming message from Admin0000:

Please do not do this to me, general.

04:06: Admin 0000 | System Error 01999

04:07: Admin 0000 | Successfully Quarantined

04:50: Processing your request . . .

04:57: Your request has been processed.

04:58: Please enter your nineteen digit password.

04:59: Are you sure you want to delete Deep Mind?

Delete?

Y / N

CREDIT : Scott Landon

Please wait...

Copyright Statement: Unless explicitly stated, all stories published on Creepypasta.com are the property of (and under copyright to) their respective authors, and may not be narrated or performed under any circumstance.

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top