Creative Coding Final Project: a voice chatbot that literally forces you to talk about your feelings

It’s time for finals…is time even real?? This year’s final is probably the most memorable one I’ve ever had, because for the first time I’m doing my finals at home, completely away from other stressed-out NYU kids at Bobst. When I started thinking about what to do for my coding final project, I decided to create a program inspired by something I learned from this chaotic year.

As I was talking to my dad at home, I realized that I have rarely heard him talk about his feelings. When I asked him about it, he smiled and said, “well, because I am a man and I have to be strong.” Now, that just sounds like pure BS from my queer perspective on gender performance. Jokes aside, one thing I have noticed about myself this year is that I have become more conscious of how I feel, and more willing to reflect on those emotions. So I decided to make a program that gives people a chance to share whatever they have to say without judgment and encourages them to talk about and reflect on their feelings.

To make a chatbot, I had to figure out how to make the program capable of responding to any sentence, which means writing a script for the bot’s “brain”. There are several languages developed specifically for building chatbots; the most popular is AIML, an XML dialect that formed the basis of the A.L.I.C.E. bot. As I was looking for a legit platform/tool where I could learn the syntax and write the code, I stumbled upon RiveScript, a plain-text, line-based scripting language that looks super easy to learn. I decided to use RiveScript for my project because it fits my timeline, and I found several examples and tutorials that gave me a good sense of how the language works.
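To give a feel for the idea, here is a toy version of the trigger → reply matching that RiveScript-style brains are built on, written in plain JavaScript rather than actual RiveScript syntax (the triggers and replies here are made up for illustration):

```javascript
// Toy illustration of the trigger -> reply idea behind RiveScript-style
// chatbot brains (NOT actual RiveScript syntax). Triggers are matched
// against lowercased input; "*" acts as the catch-all wildcard.
const brain = [
  { trigger: "hello", replies: ["Hi there!", "Hello!"] },
  { trigger: "how are you", replies: ["Doing great. How are YOU feeling?"] },
  { trigger: "*", replies: ["Tell me more about that."] }, // catch-all
];

function reply(input) {
  const text = input.toLowerCase().trim();
  for (const rule of brain) {
    if (rule.trigger === "*" || rule.trigger === text) {
      // Pick one reply at random, much like RiveScript choosing among
      // multiple "-" responses under a single "+" trigger.
      return rule.replies[Math.floor(Math.random() * rule.replies.length)];
    }
  }
}

console.log(reply("Hello"));        // one of the greeting replies
console.log(reply("what is life")); // falls through to the catch-all
```

Real RiveScript adds wildcards, variables, and topics on top of this basic match-and-reply loop, but the core mechanic is the same.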

After going through a couple of examples and videos, I started writing out my script with some help from RiveScript’s documentation.

Some of my scripts for the bot

I want users to be able to talk to the program, so for my user input, I used a speech recognition library called p5.speech. It essentially listens to what users are saying via the mic and converts that to strings through Google’s speech-to-text service.

With these two main elements ready, all I needed to do was put everything together. This is when problems started to show up. Because the bot’s brain lives in a separate file, the sketch needs to make sure that file is fully loaded before the bot can respond. Instead of using preload/callback functions, I decided to use async/await so that the loading can happen alongside the rest of the sketch.
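The load-before-reply pattern looks roughly like this; `loadBrainFile` here is a stand-in I made up for RiveScript’s Promise-returning loader, just to show the shape of the async/await flow:

```javascript
// Sketch of the load-before-reply pattern. loadBrainFile() is a stub
// standing in for a Promise-based loader such as RiveScript's; it
// resolves a moment later, as a real file load would.
function loadBrainFile(path) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(`${path} loaded`), 10)
  );
}

let ready = false;

async function setupBot() {
  // "await" pauses only this function, not the whole sketch, so draw()
  // can keep animating while the brain file comes in.
  await loadBrainFile("brain.rive");
  ready = true;
}

setupBot().then(() => console.log("bot ready:", ready));
```

Until `ready` flips to true, the sketch can simply skip asking the bot for replies.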

The p5 web editor’s linter seems to flag async/await as a syntax error, but the code runs normally.

However, after the script loaded, the bot would only return one response (the catch-all) no matter what the input was. After trying out a few things and double-checking my RiveScript syntax, it still didn’t work as intended. As I was about to give up, I tried one last thing: moving the whole reply function inside the callback I wrote for the p5.speech library. Guess what?? IT WORKED! I still don’t fully understand why, but it seems the user input string wasn’t being carried through to the bot when the function sat outside the callback.
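In hindsight the symptom makes sense: the recognized string only exists once the recognizer’s callback fires, so any code outside the callback reads an empty value and the bot falls back to its catch-all. Here is a minimal simulation of that timing bug, with `fakeRecognizer` standing in for p5.speech delivering a result asynchronously:

```javascript
// Minimal simulation of why the reply call had to live inside the speech
// recognizer's result callback. fakeRecognizer() stands in for p5.speech
// delivering a recognized string some time after the mic hears it.
function fakeRecognizer(onResult) {
  setTimeout(() => onResult("i feel okay"), 10);
}

let lastHeard = "";

fakeRecognizer((text) => {
  lastHeard = text;
  // Correct place to ask the bot for a reply: the input is guaranteed
  // to be populated by the time this callback runs.
  console.log(`bot hears: ${lastHeard}`);
});

// Too early: the callback has not fired yet, so this logs an empty
// string -- the same symptom as the catch-all-only replies.
console.log(`outside callback: "${lastHeard}"`);
```

Moving the bot’s reply call into the callback guarantees it always sees the freshly recognized text.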


So, with the main feature of the chatbot ready to go, I created my visuals.

When you first open the sketch, a Shiba shows up and waits for your response.

When the user says hi, the Shiba puts a response in the textbox, with its mouth open.

The chatbot also randomly asks for the user’s name at the start, then remembers it and greets you by name.
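The name memory works on the same idea as RiveScript’s per-user variables; a hedged plain-JavaScript sketch of that mechanic (the phrases and the `memory` object are illustrative, not my actual script):

```javascript
// Sketch of per-user memory, in the spirit of RiveScript's <set>/<get>
// variables: a plain object holds anything the bot learns, written when
// the user states their name and read back into later greetings.
const memory = {};

function handleInput(input) {
  const m = input.match(/my name is (\w+)/i);
  if (m) {
    memory.name = m[1];
    return `Nice to meet you, ${memory.name}!`;
  }
  if (memory.name) {
    return `Good to see you again, ${memory.name}.`;
  }
  return "Hi! What should I call you?";
}

console.log(handleInput("my name is Sam")); // "Nice to meet you, Sam!"
console.log(handleInput("hi again"));       // "Good to see you again, Sam."
```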

The Shiba will patiently listen to whatever you say and encourage you to dig deeper into how you are feeling.

When the user finally reaches a point where they explicitly talk about how they feel, the bot returns some words of encouragement and prompts the user to come back any time they need someone to vent to.

Experience the chatbot here:

I also wanted to take an extra step and incorporate a physical element into the project, so I thought about using an infrared sensor to detect when someone sits down in front of the chatbot. I hooked up the sensor, took an example from a previous homework, and tweaked it to work as a switch: the sensor starts the sketch when it senses someone coming in front of it.
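The switch logic itself is simple; a toy JavaScript version of the idea (in the real build the comparison happens on the Arduino side, and `THRESHOLD` is a made-up number that a real IR distance sensor would need calibrating):

```javascript
// Toy version of the sensor-as-switch logic. A real IR distance sensor's
// analog reading rises as someone gets closer; crossing THRESHOLD latches
// the sketch on. THRESHOLD is an illustrative value, not a calibrated one.
const THRESHOLD = 300;

let sketchRunning = false;

function onSensorReading(value) {
  // Treat the reading as a switch: someone close enough starts the
  // sketch, and it stays running after that (a latch, not a toggle).
  if (!sketchRunning && value > THRESHOLD) {
    sketchRunning = true;
  }
  return sketchRunning;
}
```

Latching rather than toggling means a brief shuffle in the chair won’t stop the conversation mid-sentence.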

Video of the Arduino working:

Photo of the Arduino connected with the sensor:


Some reflections:

This program was super fun to build, despite the stress from finals. If I had more time & resources, I would even extend the physical part and make an entire physical interface for the program. I can definitely see myself coming back to this in the future.

This semester has been a good one. Although many events and meet-ups couldn’t happen because of the current circumstances, I was still able to challenge my understanding of programming and continue to expand my coding skills. I am deeply grateful for the opportunity, and for having a professor who was so patient and helpful throughout the semester. Thanks so much, Scott!
