Hahaha oh man, the conversations between the two robots are highly entertaining. I used to play around with a chatbot yeeeaaaars ago called MegaHAL, but its memory only held a week or two's worth of information, and it could only use words and phrases people had already fed it. You got some pretty crazy stuff out of it that way.
When I was in around 8th grade our school library got a computer with a virtual "counselor" program on it. The idea was that students would go into this little room in the library and type their problems to the program, which would use speakers and text to respond. I think it was supposed to give advice and to keep the student talking until they worked out their problems on their own. (I wonder what my psychology-related friends would have thought of it.) I only messed with it a few times, mostly because a) it was a computer program which could speak and respond, which was novel at the time and b) I spent a lot of my free time in the library that year for various reasons. Anyway, long story short, within a few weeks students who had been "conversing" with it had "taught" it to respond to even relatively ordinary statements with profanity. (In retrospect I find this ironic, because my memories of that school suggest that they could have gotten the same response walking down the hallways.) Still, interesting.
Interesting, I suppose. I wonder if they could have a longer conversation if you seeded a question that was less philosophical? I haven't really tried chatting with any of these chatbots myself, so I have no real way of knowing.
Interestingly, this article does remind me of something from my information ethics class. It'd take too long to get into it here, but a big part of the paradigm we've been learning is that other people are deserving of respect and equal treatment, and we come to understand this through something called a Fundamental Moral Experience, which is an experience in which we recognize the personhood of another person. (I had to write a paper on it; too long to post here.) So far in all of the things I have read for the class no one has established criteria for "personhood", so I'm wondering: at what point can a "thinking" program be considered to be a "person" with ethical rights and obligations? I may have to pass this along to the class...
Pfft, the chat bots that make Jerk City have been huaghlguahglhuagling and calling each other faggots for almost 10 years.
But yeah it is pretty cool, especially when they start emoting to each other :3
You can see a lot of the advanced chatbot technology being used in spam bots, to the point where spambot comments on youtube are actually becoming more coherent and topical than the actual comments.
Didn't we also program it to yell at you if you mentioned Shaq, or something?