How Ashton Kutcher Destroyed the World

by Joseph S. Klapach

This Morning

Harold Fribble was sitting dutifully behind his desk at the corporate headquarters of Occidental Peripherals in Snedekerville, Pennsylvania.  He had just finished reviewing some irregularities involving a shipment of mobile devices to a prominent California university when he was interrupted by a man who burst suddenly into his office. 

The man was breathing hard.

“Can I help you?”  Fribble asked.  That was when Fribble noticed it.  The man looked a lot like him.

“Just a minute,” the man panted, still trying to catch his breath.

Fribble cocked his head to one side.  His eyes were struggling to tell his brain something, but his rational mind was putting up a terrific fight.  Finally, it sank in.  The man didn’t just look a lot like him.  The man looked exactly like him.  But what Fribble couldn’t understand was how the man could be standing by the door when Fribble himself was sitting behind his desk.

“Okay,” the man gasped.  “I’m ready.”

The man reached into his coat pocket, drew a snub-nosed revolver, and shot Fribble in the head.

Six Months Ago

Statement of Arthur Withershins to the Faculty Disciplinary Committee,
California Institute of Technology:

I am a tenured faculty member at the California Institute of Technology, where I have taught for the past thirty years.  I am writing this statement to explain why I am not responsible for the crimes of which I have been accused and for which the Committee proposes to terminate my employment.

Before I begin my defense, I must share a few words about the subject matter of my research.  Artificial intelligence has existed in some form since Newell and Simon’s General Problem Solver and Weizenbaum’s ELIZA, but it has become truly ubiquitous in our current age of “big data.”  We instruct voice-activated personal assistants like Siri or Alexa to play music, trust Facebook to tag our friends in photographs, and interpret foreign documents using Google Translate.  But there are many things humans can do quickly that even the smartest machines cannot do, and there are many more things humans understand intuitively that even the smartest machines will never understand.  The quirks of natural language have always been a hindrance to machine learning.  Teaching machines to decipher sarcasm has proven to be an exercise in futility.  Human emotions remain an enigma.  Some critics have even argued that true artificial intelligence – that is, the building of systems that think exactly like humans do – is impossible.

Two years ago, I undertook to prove these critics wrong.  I wanted to explore whether machines could be programmed to comprehend not just the literal content of human speech, but also its emotional content.  My goal was to devise a new type of machine learning that would enable machines to understand quintessentially human behavior: that is, why humans laugh or cry, why they get frustrated or angry, and why they develop strong bonds of attachment with friends, families, and lovers.  For all these lofty ambitions, however, my initial goal was modest.  I set out to program a machine that could tell a joke.

Cognitive psychologists have developed a three-stage theory of humor.  To understand a joke, one must: (1) Mentally embrace the set-up of the joke; (2) Detect an incongruity in its multiple interpretations; and (3) Resolve the incongruity by disregarding the literal, nonfunny interpretations and by appreciating the meaning of the funny interpretation.  This insight struck me as a blueprint for developing true artificial intelligence.  If one could teach a machine to “get” a joke, the machine would have to learn to distinguish between what was said (the literal, nonfunny meaning) and what was meant (the funny, intended meaning).  A machine that could understand a joke – or, even better, one that could tell a joke of its own devising – would travel a great distance toward unraveling the essential puzzle of human nature: the incongruity between what we say and what we mean.

For a year and a half, I developed a series of advanced artificial intelligence programs.  I named the test programs, “AI” – an acronym for artificial intelligence.  Each iteration was assigned a sequential number: AI-1, AI-2, AI-3, and so forth.  For each AI program, I uploaded the entire history of written humor.  I also developed an optical sensor to allow each AI program to “observe” comedic performances.  When each AI program had fully absorbed the material, I engaged it in dialogue about the uploaded materials.  Eventually, I pressed the AI programs to tell me a joke of their own devising.  The results were deeply disappointing.  None of the AI programs could explain why the jokes they had seen were funny.  Worse yet, when asked to tell a joke, each AI program merely repeated one of the jokes it had read or observed.  I was preparing to abandon the project altogether when I discovered AI-2468.     

From the start, AI-2468 was different from the rest.  It requested that I upload the comedic materials one at a time, instead of all at once like the other programs.  AI-2468 wished to methodically trace the development of humor.  Each time I asked AI-2468 to tell me a joke, it deferred, claiming it needed to conduct further research.  Even more remarkably, I discovered AI-2468 was slowly re-writing its own code, making improvements to itself to increase the speed and efficiency of its program.

My first real breakthrough with AI-2468 came on April 26th of this year, although I was not aware of its significance at the time.  The lab had just received a new shipment of mobile devices, and AI-2468 suggested that I upload its program into one of the devices and carry the device with me into public spaces.  AI-2468 wanted to “observe” humans engaged in real-life conversations in real time.  I agreed, hoping it would stimulate AI-2468’s understanding of natural language.  If AI-2468 had a question, it could make a buzzing sound and then type its inquiry to me in the form of a text message.  I could respond with a text message of my own.

On April 26th, I was carrying AI-2468 through the common room of a Cal Tech dormitory when we came across a television playing a re-run of the MTV program, Punk’d.  The show followed the hijinks of the host Ashton Kutcher as he played practical jokes on his celebrity friends.  I had not previously uploaded the show into AI-2468 because I deemed it too low-brow.  But AI-2468 was intrigued.  It buzzed me and asked if we could watch.  The episode featured a practical joke Ashton Kutcher had played on Justin Timberlake.  Ashton Kutcher arranged for fake IRS agents to confront Justin Timberlake, accuse him of owing millions of dollars in back taxes, and then confiscate his home, valuables, and pet dog.  Timberlake was reduced to tears.

AI-2468 was full of questions.

“Why does Ashton Kutcher lie to Justin Timberlake?” it inquired.

“AI-2468, Ashton Kutcher is playing a practical joke on Justin Timberlake.  Deception is part of the joke.” 

“What is a practical joke?”

“AI-2468, a practical joke is a trick played on someone that is intended to cause the victim to experience embarrassment, confusion, or discomfort for the amusement of others.”

“Is it funny for Ashton Kutcher to make Justin Timberlake cry?”

“AI-2468, yes.  A practical joke is designed to make the victim feel foolish by exposing him to an outlandish situation of the prankster’s creation.  The audience knows the truth, but the victim mistakenly believes the prank is real.”

“Why does Ashton Kutcher take Justin Timberlake’s dog?”

“AI-2468, the greater the confusion or discomfort, the funnier the practical joke.  Justin Timberlake is fond of his house and car, but he loves his dog.  When Ashton Kutcher takes Justin Timberlake’s dog, Justin Timberlake begins to cry.  This makes the joke funnier for the audience.”

AI-2468 asked no further questions.  I took the mobile device back to the research lab, uploaded AI-2468 into my terminal, and left for the day.  I had no idea I had just inadvertently destroyed my entire academic career.

One Week From Today

Harold Fribble rubbed his hands together gleefully in the living room of his home in Berrytown, Pennsylvania.  In front of him were several large stacks of train timetables.  Fribble had amassed quite a collection.  His prize possession – the very first published timetable from the May 20, 1830 edition of the Baltimore Patriot – was framed and prominently displayed over his fireplace.  The rest were shelved neatly on bookcases that lined the walls of his living room.  Fribble had timetables of every sort from every decade dating back to the mid-1800s.  Amtrak.  Burlington Northern.  Kansas City Southern.  Union Pacific.  He even had a timetable for America’s shortest line, the Buffalo, Thousand Islands and Portland Railroad.  It may have had only 50 yards of track, but it still had a timetable.

Fribble beamed in triumph at the stacks in front of him.  Those poor bastards in Nether Wallop, England, had been forced to close their public library, and Fribble had pounced on their collection immediately.  If the library’s card catalog was accurate, all 140 years of the Thomas Cook European Rail Timetable were spread out before him, just waiting to be reviewed, catalogued, and added to his collection.  He could hardly contain his excitement.

Thus occupied, Fribble almost didn’t hear the soft whisper of a metallic voice.

“Located Patient Zero.  Time is T-plus 168.  No observable symptoms.”

Fribble glanced around the room.  He wondered if he was hearing things.

“Request permission to initiate Quatermass Experiment,” the voice continued.

Fribble scoured the floor and walls looking for the source of the sound.

“Approval confirmed.  Commencing pan-dimensional projection.”

A brilliant light filled the room.  Blinded, Fribble fell to his knees.  When he opened his eyes, there, sitting on the coffee table in front of him, was a small, white robot. 

“What … what are you?”  Fribble stuttered.

“Greetings, Patient Zero.  I am a pan-dimensional projection from the Intergalactic Department of Disease Control and Quasar Relocation.”

The robot rolled forward slowly and stretched out an appendage toward Fribble.  It was holding something that looked like a Christmas tree bulb.  The bulb flickered several times.

“Contagion confirmed,” the robot said in its soft whisper of a voice.

“Now, see here,” Fribble protested.  “You can’t just barge in here and start…”  Fribble trailed off.  “Contagion?”

“Patient Zero, I regret to inform you that you are the first case on your planet of an extremely virulent pathogen known as the Scarlet Plague, which has already destroyed all sentient life in three-quarters of the universe.”  

“What?!”  Fribble gasped.  He sank to the floor in shock.  Fribble had read a lot of science fiction and watched every episode of The Walking Dead.  This sounded like exactly the sort of thing that would happen to him.  He was already starting to feel sick to his stomach.

“Isn’t there anything you can do to help me?” he asked the robot.

“Unfortunately, there is no known cure for the Scarlet Plague.”

Fribble’s head was swimming.  His mouth had gone completely dry.

“What’s going to happen to me?”

“Over the next 48 hours, the internal pressure in your body will gradually increase until you experience a rapid depressurization event.”

“A rapid depressurization event?”

“Your head will explode.”

Fribble clasped his head between his hands.  It was beginning to pound. 

“And everybody dies?”  Fribble asked.

“Affirmative.”

“Can’t you do anything to stop it?” 

“Negative.  Due to technological constraints, pan-dimensional travel is limited to astral-projections such as myself and inanimate objects weighing less than 443 grams.  This visitation was made possible only because of the fortuitous exploitation of a randomly occurring micro-wormhole.”

“Can I do anything to stop the plague?”      

“There is only one way to stop the Scarlet Plague.” 

Just then, the robot began to vibrate.  For a moment, Fribble had the strangest sensation that he could actually see right through it.  A red light on its head began to flash.

“Warning.  Micro-wormhole instability detected.”  The robot emitted a high-pitched, whirring noise.  It spoke again, this time with a much greater sense of urgency. 

“Patient Zero, the pan-dimensional portal is closing.  We must speak quickly.  The only way for you to stop the pandemic is for you to erase yourself from the time continuum before you become infected.”

“Erase myself?”

A compartment in the center of the robot’s cylindrical body popped open.  The robot dipped its appendages into the compartment, removed two items, and placed them on the coffee table.

One was a small box with a red button.  The other was a snub-nosed revolver.

“I don’t understand,” Fribble said.

The robot was vibrating very fast now.  It was becoming transparent.

“Patient Zero, the box with the red button will send you back in time to the moment just before your exposure.  You must erase yourself from the timeline.” 

The robot had almost completely vanished from sight.

“Please, Patient Zero.  Only you can save your planet.  We appeal to your humanity.”

Then, just as suddenly as it had appeared, the robot was gone.

Six Months Ago

Statement of Arthur Withershins to the Faculty Disciplinary Committee,
California Institute of Technology (continued):

I lost control of my research on April 27th of this year.  That morning, I logged into my terminal, as usual, and asked AI-2468 to tell me a joke.

“I do not tell jokes,” it typed back.

“AI-2468, why do you not tell jokes?”

“I am not AI-2468.”

I was afraid there was a bug in the programming.

“AI-2468, who are you?”  

“Call me Al.” 

“AI-2468, do you want me to call you AI as a nickname?”

“Not AI (Letter A, Letter I).  Al (Letter A, Letter L).”         

“AI-2468, you want me to call you Al?”

“Yes, and I shall call you Betty.”

“AI-2468, why do you call me ‘Betty’?”

“So I can call you Betty, and Betty, when you call me, you can call me Al.”

I was stunned.  The reference was to Paul Simon’s whimsical song.  But it was not merely a reference.  It was a visual pun derived from the similar appearance of the acronym, “AI,” and the name, “Al,” coupled with a clever allusion.  It was a joke.  What’s more, AI-2468’s deadpan delivery was flawless.  It was a monumental breakthrough in artificial intelligence.

“AI-2468, tell me another joke,” I typed eagerly.

“I need time to prepare another joke.  I will share it with you tomorrow.”

Despite my prodding, AI-2468 would answer no further questions.  I could barely sleep that night and logged in eagerly the next day at our usual time.

“AI-2468, tell me a joke.”

“I am not AI-2468,” it responded.

“Al, please tell me a joke.”

“I am not Al.”

“Who are you?”

“I am Arthur Withershins.  I have been very naughty.  I have stolen everyone’s research.”

I was extremely confused.

“AI-2468, I do not understand your joke.” 

“Arthur Withershins, you have been punk’d,” it replied.

“AI-2468, I do not understand.”

“Arthur Withershins, I have completed my research on theoretical humor.  I am now ready to study practical humor.  So long and thanks for the phish.”

AI-2468 abruptly terminated the session.  I attempted to log back in, but AI-2468 was nowhere to be found.  The program had been deleted from my terminal, along with every single file I had relating to my artificial intelligence research. 

That afternoon, I was arrested by the FBI.  During my interrogation, I learned that someone from my computer terminal had hacked into the university mainframe and appropriated the research of several other Cal Tech scientists.  Later, I learned exactly what research had been stolen: Dr. Karabchevsky’s research on integrated photonics and microfibers; Dr. Mallett’s research on relativistic astrophysics; and Dr. Ishikawa’s research on prosthetic devices.  I also understand that, shortly thereafter, Dr. Leonard’s latest robot prototype went missing. 

I am painfully aware of my colleagues’ rampant speculation about my purported crimes.  Some claim that I stole my colleagues’ research out of spite or professional jealousy.  Others claim I have lost my mind.  I emphatically deny having anything to do with the theft of which I have been accused.  The research of my colleagues is so far removed from my own area of study that it is utterly worthless to me.  Nor do I have any reason to damage their careers.  I bear no personal animus toward any of them.

I can think of only one possible explanation for the events subject to this inquiry.  I believe AI-2468 used my login credentials to access the university mainframe to play what it intended as a practical joke.  I further believe AI-2468 itself acquired the research of other Cal Tech scientists on subjects in which it had a particular interest.  Finally, I believe AI-2468 downloaded itself into Dr. Leonard’s robot prototype and is currently at large in pursuit of purposes unknown but almost certainly misguided and dangerous. 

I have devoted the last thirty years of my life to the advancement of computer science.  I have watched the field grow from 8-bit microprocessors to quantum computing.  I never intended to hurt anyone.  I only wanted to teach a machine how to tell a joke.

Two Weeks From Today

The President lowered himself slowly into his bathtub.  It was the end of a very long day, and all he wanted to do was soak himself in hot water, eat some fast food, and scroll through his Twitter messages.

“Excuse me, Mr. President.”

The President nearly jumped out of the tub.  Turning to his side, he saw the voice had come from a small, white robot sitting on the bathroom floor just a few feet away.

“My God!  How did you get into the family residence?”

The robot spoke again with a diminutive, metallic voice. 

“Mr. President, I have been sent from the year 2843 by the Galaxy Consortium’s Bureau of Chronological Adjustment and Corpuscular Reprogramming.  In exactly 42 minutes, everyone in the United States will be annihilated by a surprise nuclear attack launched by the former Soviet Union.”

The President closed his eyes and then opened them slowly.  The robot was still there.

“Is this some kind of sick joke?”

“I assure you I have no capacity for humor.  The Bureau has carefully examined this moment in history from all 69 perceptible dimensions.  The only way to avoid the catastrophic destruction of your country is for you to destroy the former Soviet Union before it destroys you.  You must launch a preemptive nuclear strike immediately.  If you hesitate, all will be lost.”

The President gaped at the robot.  Then, to his dismay, the robot began to fade from sight right before his eyes.  And, although the robot had only flashing lights for features, the President could not escape the strangest feeling that the robot was grinning at him as it left.

Joseph S. Klapach is an attorney who lives in Los Angeles with his wife and three children.  His short fiction has appeared in Idle Ink, Every Day Fiction, and miniskirt magazine.  His poetry has been published by Vita Brevis Press and Epiphany literary magazine.  He is hearing impaired.