Finished: April 28, 2024

Rating: 4.5 out of 5.

Why I read this

I’ve finally put a close to the Hyperion series, I’ve read nearly all of the Dune series (by father and by son), I’ve heavily explored the Enderverse, I’ve discussed the dreams of electric sheep, I’ve walked the goofy side with Douglas Adams, and I’ve delved into the technical yet realistic modern world of Andy Weir. By no means do I believe I have explored all that science fiction has to offer, but I do like to think I’m well versed on the subject. Watching Paul (the 2011 film with Simon Pegg and Seth Rogen), I understood more than my share of nerdy references. So when searching for my next audiobook, I looked to the internet to see what classics still waited for me to explore, and I saw the name Asimov repeated over and over. Although I am still a bit intimidated by the sheer quantity of his content (I like to be complete, and if I decide to take on Asimov’s 500+ books I will be embarking on a lifelong mission), I was able to find at least two series in his bibliography that I have a lot of interest in pursuing. First was his Foundation series, which regularly appears on top science fiction lists, and second was his Robots series, which began with a familiar title: I, Robot. I was shifted towards the latter by the fact that the film I, Robot was one of the first adult films I remember seeing. It came out when I was eight, and it was the epitome of cinema to me at the time: cutting-edge graphics, mesmerizing battle scenes and car chases, even moral questions both simple and challenging for a young mind.

What I learned

After I had listened to about two hours of this book (it’s only about 8 hours of material in total), I was getting pretty impatient for Will Smith’s badass, robot-fighting cop to enter the story. I decided to peek at the Wikipedia page for I, Robot (the film), and I was dismayed to discover that the film has nothing to do with the book! Apparently the writers just took the rules of robotics from Asimov, named a couple of characters the same, and made an entirely new story! How that passed copyright laws I have no idea, but apparently that is acceptable! So going further in the book I had to rely on my own, new analysis, and not the bits I remembered from the film. Regardless, it was not a difficult thing to find interesting thoughts in Asimov’s world of robots.

The first thing I noticed, having read so much recently, is that this book felt significantly more plot-driven than I am used to. Almost all of the information came in the form of characters directly talking or interacting with one another, and there was not very much “and Susan thought X, Y, Z” filling the pages, unlike in The Reivers, for example, which had almost no conversational text and leaned completely the opposite way. Frankly, I started to associate this style in I, Robot with a more childish style, like how in teen novels (for example, I recently re-read The Lightning Thief, but in French) almost everything is about plot progression and there’s nothing deeper than the surface-level information. However, Asimov managed to put very deep information into a very light read, which is probably the reason his work has endured. At a wine tasting this past weekend (I know, how very French), they discussed how a really great wine has the strong flavors that a weaker wine would have, but underneath it also has a second or third layer of smells or tastes. That’s what I feel like Asimov made here. A worse writer would just make a plot and call it a day, but he manages to put a second or third level on what could be seen as a very simple idea. It’s also refreshing that you can read at the level you’d like: if you want to think a lot about it you can, but if you just want a nice story it has that available as well.

From the first short story, the proposed robotic world was morally complex. I enjoyed the idea of a robot taking care of a child like a nanny. After all, many of the needs of children are relatively fixed and could theoretically be met by a robotic companion. Yet a string pulled at me that said it is wrong for a robot to be raising a child. It’s just like how we generally feel that children being raised in front of screens is bad; it just feels wrong. But at the same time, is this human gut reaction preventing us from progressing as a society? Could we mechanize some of our more humane habits and end up in a better world? For a real-world example, I’ve been criticized for not being very spontaneous in my life (what can I say, I like to plan), but is it better to be spontaneous, or to put constant and robotic effort into something? Here we could discuss a relationship. Would I be better served in my relationship by trying to dream up a new idea for a romantic gesture at random intervals, or by cementing Tuesday nights in my calendar for all eternity as date night with my partner? I think the first idea sounds more romantic, but I think over the long run the second will yield a stronger relationship. Also, if we automate some of our feelings, maybe we can take some of the pressure off our already overworked brains and open up space to feel those feelings instead of focusing all the effort on creating them.

Next, I loved the continued focus on the thought that absolute laws are dangerous. The three laws of robotics (loosely: 1. Robots can’t hurt humans, 2. Robots can’t disobey orders, 3. Robots can’t allow themselves to be destroyed; it’s important to look up the full wording, though, because as we will see, the nuances of these laws matter a great deal) were designed to make robots act essentially like the ideal person. Imagine someone who can’t hurt anyone else, will do what they are told by the government or credible sources, and will do all they need to in order to take care of themselves. Sounds like a pretty good person, right? Now imagine someone who is bound absolutely by these rules and you have a very different scenario, as we see in this book. We saw robots paralyzed by conflicting applications of the rules, and modified trolley problems of ethics and morality based on these fixed rules. What does it mean to hurt someone? How can priority be managed between these rules? It makes clear the not-so-obvious fact that right and wrong cannot be truly black and white. Even for beings with the ultimate rules of good behavior hard-coded into them, scenarios still exist where “the wrong choice” will be made.

The last thing I found really engaging came in the plots near the end of the book. Once the world has been revolutionized by enormous robotic systems, our robot psychologist discusses with our potentially robotic world controller the inevitability of certain conflicts. He’s alluding to the “inevitable” conflict between robots and humans in the (maybe not so distant) future, but he references many near-conflicts of the past: times when all the experts had said that two states, peoples, or ideas could not exist together, and yet, like in the Cold War, they endured in stalemate until a peaceful resolution finally prevailed. This discussion made me think of the current status of certain global conflicts. Is it possible that the collision path the right and left in the US appear to be on (like we saw in Why We’re Polarized) is simply an optical illusion? Looked at from another angle, maybe they are just two parallel lines that will never converge or diverge.

What I didn’t like

It was actually a bit of a challenge to say why I didn’t want to give this book a five out of five. It had all the normal components of a book I love: a thought-provoking, in-depth story and realistically acting characters, but something was just missing. The best guess I have is that it has to do with my general dislike of short stories. I think these stories can excel at highlighting a challenge, a concept, or a question, but they normally fail to set up the appropriate context where that idea can be fully explored. It’s like building a rocket ship but failing to give it enough fuel to get to its destination, so when the story ends it simply floats off into space without enough oomph to get through to the conclusion. With almost a week of thinking, that is about all I could come up with for why I was not fully enamored with the book. Just as our heroes had trouble determining what was wrong with each dysfunctional robot, we humans normally struggle to determine exactly what is wrong. I think being able to accurately and consistently determine the root of most problems would be an exceptionally underrated superpower.

Questions I asked

Why can’t we use robots for menial but personal tasks like being a nursemaid? Why does that make us feel weird?

Why are societies in science fiction so often very large nations, states, or planets? Ender’s Game, 1984, Brave New World, even Star Wars, Hyperion, Star Trek, and on and on classify people only in planetary groups. However, in the real world, it appears that we are trending in the exact opposite direction, away from mega-nation-states and towards micro-nations (for example, Catalonia’s bid for independence or Scotland’s independence referendum).

Can we consider a species we create to be equal to us eventually? At what point of artificial intelligence will we no longer be alone in the universe?

My Favorite Quote

“The cultivated habits of a human lifetime are not easily broken.”

Susan Calvin

Books I liked like this one

Ender’s Game (series) : Orson Scott Card (for moralistic rules in a sci-fi setting which dealt with the classification of intelligent species)

Hitchhiker’s Guide to the Galaxy : Douglas Adams (for short stories, using humor in a futuristic setting to explain moral truths about human society)

