AN: This was a little idea I had a while back. I don't know if it's been done or if it's cliche (probably, haha), but it started when I did a presentation on the uncanny valley and what it means to be sentient (it was for a philosophy class, so you'll excuse me if I throw some of those concepts around and it gets a bit...heady). Basically, Arthur is a robot who is programmed to have human emotions and falls for his main programmer/designer, Alfred. I will warn you, though I'm not going to give away the entire plot I have in mind, it will get a bit dark in later chapters. That being said, this is a short introductory chapter, so...I hope you like it.
Alfred F. Jones, Ph.D
77 Massachusetts Avenue
Cambridge, MA, 02139
(617) 253-1000
Dear Mr. Clark,
In regard to your recent letter concerning the progress of my research, I am writing to cordially invite you to the Massachusetts Institute of Technology's School of Engineering to view the work I have been doing firsthand. As you know, the robotics lab I head is currently working on programming a perfectly sentient robot. Over the past two years we have made incredible breakthroughs, though I understand your concerns. What, exactly, is the grant money going into? In this letter, I will briefly explain the concept:
Allow me to be the first to introduce you to our Artificial Reasoning, Thinking, and Humor Emulation Robot, or ARTHER as we call him in the lab. This has been our main project for a little over a year and a half now. Like IBM's famous Watson, he uses a databank and the internet to gather information and cross-check it in mere seconds. With ARTHER, however, we have decided to go a step beyond simple data collection in the service of producing facts or correct answers. It is the addition of a personality, a sense of humor, that makes this robot unique among other AIs. We strive to create a realistic set of emotions that the robot can generate spontaneously, utilizing the same principles of cross-checking data and learning from past interactions. In this way, the more the robot experiences, the more it learns, and the more "human" its conversations and emotions become.
It took us over half a year to finish all of the necessary programming, and while we are still fixing some issues and bugs, we have had nearly a year to converse with ARTHER and watch its conversational ability slowly increase. Only recently have we noticed the robot's emotions coming into play; ARTHER has shown signs of more primal, basic emotions such as anger, happiness, sadness, and, on one occasion, even fear. We hope in the future to evoke emotions as complex as confusion, curiosity, jealousy, or even love (to give just a few examples of the vast palette of human emotions available).
I could not possibly explain or demonstrate the intricacies of ARTHER through writing. It would be much appreciated if you would do me the honor of accepting my invitation to visit my lab and view the progress that has been made over the past two years.
Sincerely,
Alfred F. Jones, Ph.D
Alfred hated writing these things. They always sounded so formal and tense and…stuffy. Sometimes he forgot he was a professional adult who had to deal with other professional adults. Even after obtaining both a Master's and a Ph.D. in robotics engineering, he still liked to think of himself as a little kid who just tinkered with things until he made something that worked. That's what really drove him, if he was being honest. It was just fun to create things, especially if the end result worked (though sometimes he preferred it if it failed at least once or twice; who doesn't like a good puzzle, after all?).
That's why ARTHER hadn't gotten to be the least bit boring, even after two years of working on the robot. It was inherently designed to keep Alfred entertained and busy; it was a robot that learned more every day and continually grew more complex. There was a time, about a month ago, when ARTHER nearly broke down completely (mechanically, not emotionally) and almost lost all of his learning experience. Alfred himself had nearly broken down at that point as well (emotionally, not mechanically). But most of the information was saved, and ARTHER was all right in the end.
That was the incident in which it had expressed fear, and when Alfred truly saw it was starting to learn more and more. Had ARTHER truly understood he might have "died" back then…? Logic said it was just data collection: that he had read articles online about "crashes" and "bugs" and "memory loss," seen them associated with "fear," and then researched and replicated that fear (all in a matter of a few seconds).
But it's hard to believe it was just programs running in a machine when you look into glass eyes and actually, truly see a flash of fear. He hadn't seen it since that moment, so who knows?
It was a little while later that Alfred received the letter from Mr. Clark, the man heading the grant fund for all of Alfred's research, expressing interest in seeing what the lab had accomplished. Alfred, of course, knew this was a matter of his lab's fate: continue the project, or lose all the money and scrap ARTHER.
Alfred truly didn't want to scrap ARTHER. And, seeing as the robot was beginning to converse regularly and could move around with rudimentary steps, Alfred figured it was as good a time as any to demonstrate what ARTHER could do. Besides, from the way Mr. Clark talked, there was no way a simple written progress report would be enough to continue the funding for a project as expensive as this. So, instead, he decided to invite him over to see for himself.
Alfred felt a little bad, but he was kind of hoping he would get to see more of those little emotional flare-ups from ARTHER. Perhaps nervousness when meeting Mr. Clark, pride after doing a good job, or even stress if he thought he hadn't done well. A negative emotion was still a valuable result, even if it wasn't something Alfred preferred to see. But the point of ARTHER was not to create a robot that was happy all the time; it was to create an android who could replicate a human being. So, while he hoped ARTHER could lead a happy android life, he was determined not to grow attached to any personality that developed, because eventually the research would end and ARTHER would have to be shut down.
Just because humans could replicate themselves in robots didn't mean they should. Or perhaps ARTHER would serve as proof that, at this point in time, androids were an impossibility. Either way, it ended with ARTHER being shut down. Alfred would make sure to keep the research going as long as possible, but he also made sure to remain friendly but detached from ARTHER. He just hoped the robot never got human enough to notice the detachment.
Thank you very much for reading. Please leave a review if you enjoyed it!
