Chat Bots, a Furby, and Going Easy On Yourself

I listened to a Radiolab podcast this weekend while I wrestled with my most recent painting. The episode started with a live audience experiencing chat bots and trying to determine if they were "chatting" with a human or a bot. AI has come a long way, and the purpose of these bots is to seamlessly hand off control from the "machine" doing the chatting to a "human" as the problem gets more complex or subject specific. As the podcast proceeded, there was an interesting anecdote about a Furby. Because a Furby has emotional responses programmed into its system, the tester thought that the same type of response could be used to detect a bot. That is a huge generalization, but it made for an interesting segment as they interviewed a man who designs toys for a living.

What I want to get to is the segment where a reporter goes to interview a scientist and participates in a "virtual" session, moving back and forth between his own virtual avatar and a virtual Sigmund Freud. He was asked to think of a problem he was trying to solve and ask "Freud" for advice. The problem he was mulling over was whether he had done the right thing in leaving his mother, who has dementia, in a facility in VA, where she has many friends to visit her, or whether he should move her to NY, where he would be her only visitor.

He asked his question as himself. The virtual room changed, and he found himself sitting as the virtual Freud, voice altered to match. As he saw his virtual self from "Freud's" perspective, he developed an overwhelming sense of compassion. The dialogue went back and forth and culminated in his seeing his problem in a different light. He realized that he was in a pattern of cyclical guilt, triggered each time one of the people visiting his mother would send an email update. In a guilt-infused state, he was constantly questioning his decision to leave her there.

As he relayed his experience in the podcast interview, he was still emotionally affected by it. It went beyond seeing the issue in a different light; at a deeper level, he saw himself as a vulnerable human, stuck in this guilt loop, trying to do the right thing. He walked away with not only a sense of ease over his decision, but also a deeper sense of being human. And all of this was done in a virtual environment.

Can we do this for ourselves and each other in our day-to-day, real human lives? As we lose ourselves in our devices, can we look up more, and look at each other with a softer approach? We are all doing what we can to figure things out, successes and failures alike.


Listen to the Radiolab podcast: More or Less Human