Driven to Discover
A podcast that explores innovative University at Buffalo research through candid conversations with the researchers about their inspirations and goals.
Making Chatbots Better with Rohini Srihari
Not all chatbots are created equal. Some, like those used in customer service, are relatively simple. Others—like the systems Rohini Srihari builds—can take on far more complex tasks, such as giving a voice to someone with ALS who has lost the ability to speak. In this episode, Srihari, an artificial intelligence pioneer at the University at Buffalo, shares how she combines her love of language with computer science to create AI-based tools that not only help people with motor neuron diseases communicate, but also address the nation’s mental health crisis, predict the flow of refugees, and even assist choreographers in creating new works. As she tells host Cory Nealon, this work represents just the beginning of how AI can be harnessed for the public good.
Credits:
Host: Cory Nealon
Guest: Rohini Srihari
Writer/Producer: Laura Silverman
Production and editing by UB Video Production Group
Coming Dec. 9: Sleep is a basic biological function, so why do so many people struggle with it? Nationally recognized sleep expert Carleara Weiss unpacks the mysteries of why we sleep and what happens when we don’t; discusses her research on the connection between sleep and Alzheimer's disease; and shares her top tips for getting a better night’s rest.
Cory Nealon: If you're like me, you're a bit leery of chatbots. You swat them away like a fly when they appear on your screen. You avoid talking to Alexa and Siri. And inevitably, you yell “representative” when a robot answers your call.
Rohini Srihari: Oh yeah, I've dealt with that too, and it can be very frustrating.
Cory Nealon: That's Rohini Srihari, a pioneering computer scientist, educator and entrepreneur at the University at Buffalo. She understands skeptics like me, but she also believes that chatbots, when done right, can address major societal challenges. Take, for example, the nation's mental health crisis, or the need to improve communication devices for people with ALS and other motor neuron diseases. Today, we'll explore how Srihari is leveraging both her love of language and her decades of experience in artificial intelligence to improve public trust in AI and ultimately help people in need.
My name is Cory Nealon, and I will be your host for this episode of Driven to Discover. It's called “Making Chatbots Better.”
Dr. Rohini Srihari, thank you for joining us.
Rohini Srihari: Well, thank you for the opportunity to speak about my work.
Cory Nealon: So I hope that introduction wasn't too negative. I'm aware that many people use chatbots with tremendous results, but in your work, you state one of the main challenges this field must overcome is trustworthiness. So why do some people not trust chatbots?
Rohini Srihari: Well, it's a very new technology, and people are always a little wary of new technology. Especially with chatbots, because they almost seem magical, and people don't quite understand how they work. So there's a little bit of skepticism there. But it's interesting, because not everyone feels that way, and with some work that we did, we found that younger people, especially kids, are not bothered by chatbots at all. They feel very comfortable speaking with them.
Cory Nealon: So for people who are uncomfortable with chatbots, how do we overcome this hurdle?
Rohini Srihari: Well, I think longer conversations, extended dialogues, can be really helpful. The more you talk to a chatbot, if it's a good chatbot and it's responding correctly, then over time you begin to, you know, form some sort of comfort level with it. But in general, in order for people to trust chatbots, they have to show certain characteristics. They have to be empathetic, they have to be socially responsible, they have to adapt to the user's personality. And criteria like these, I think, are going to be very helpful in getting people more comfortable with chatbots.
Cory Nealon: So, before we get too far into this, can you please just tell us what a chatbot is?
Rohini Srihari: At the very simplest, chatbots are, you know, software tools. You see them in social media all the time. As soon as someone says something, it immediately responds with a canned response. So that's at one end of the spectrum. The other end of the spectrum is where my work is focused, and we call that conversational AI, because this is not just, you know, sending out canned responses to what someone says. It's actually listening to the person, looking at context, even their gestures, their personality, and responding based on that.
Cory Nealon: I'd like to back up and provide our listeners with a bit of context. You grew up loving to read, and you figured that literature might be your path in college. What was it about the written word that drew you in?
Rohini Srihari: Well, in my very young formative years, I lived in a place where there really wasn't that much entertainment except for the library, and the library was a little bit ancient and had a lot of classics and so on. And so that's what I had access to, and I started reading, you know, Charles Dickens, Jane Austen, all of these authors. But even to this day, I think what really impresses me is the use of language. You know, language is so powerful, and it can express very complex thoughts, and so the better you use the language, I think, the better you are doing in terms of reaching out to the reader.
Cory Nealon: So, while you love books, you were also very good at math, the field you ultimately chose to study when you went to college. This led you to earn a PhD in computer science here at UB. And at the time, there was a lot of exciting AI research happening here at Buffalo. Can you tell us a little bit about what it was like back then?
Rohini Srihari: You know, people characterized AI as interesting stuff that didn't really work. And once it actually started working, they no longer called it AI; it became software, you know, systems or something like that. But there was a lot of interesting work going on in AI, and there were different disciplines, different methodologies. Some people were working on knowledge-based approaches to AI, like expert systems, actually encoding knowledge manually. There were other people who worked on machine learning approaches to AI, for example, handwriting recognition—what are the features that can characterize handwriting, the number of swirls, the number of circles, things like that, and then using those as part of a machine learning system to try to recognize letters first, then addresses, and so on.
And of course, now we're in the new age where we're using techniques like generative AI and large language models, and those are different from the previous two generations, because here, we don't even need to tell the computer what features to look for. You just feed it data, and it learns representations, and then, based on that, it can generate new data on its own. I think I've been fortunate to have been present through all these generations of AI, and I've actually worked on all three.
Cory Nealon: One area that you're currently working on is AI and mental health. Can you tell us what that landscape looks like right now?
Rohini Srihari: Well, there currently is a crisis in the area of mental health support, partly because more and more people are facing these issues, whether it's due to isolation, COVID, work-related stress or other kinds of stressors. But there's also a lack of resources. There aren't enough professional counselors to help these people, and sometimes people don't even seek that kind of help due to stigma and so on. So that's the landscape we're in. So we are looking to see if technology, namely conversational AI, can help, at least as a first level of support, and sometimes even to determine whether a person requires further follow-up and counseling.
Cory Nealon: You have common AI tools like ChatGPT that people are utilizing as mental health resources. Is there a danger to that?
Rohini Srihari: You're absolutely right. People are using chatbots for mental health support, as a conversational partner to overcome isolation and all that. The danger is that these tools have not been vetted for that purpose. No evaluation has been conducted, and the mental health and counseling communities are not involved. And that has influenced our work, which has shifted from actually creating the chatbots that provide mental health support to coming up with systems that evaluate how good they are.
So what we would like to do is come up with a way of measuring: “Are these really doing the job?” And one of the things that we have incorporated into our work is the research from the actual counseling community. They've written volumes of texts on counseling strategies, as well as rubrics for measuring the effectiveness of a particular session. So we have incorporated that, believe it or not, into yet another chatbot that can be trained to evaluate a particular conversation: “Is this effective?” “Was progress made towards achieving the goals?” And so on. So I believe that's a direction we're going to see more and more: people vetting these chatbots, especially for medical advice. You need some assurance that these are providing accurate information and not making things even worse than before.
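To make that idea concrete, here is a minimal sketch of the kind of rubric-based evaluation Srihari describes, in which a judge model scores a support conversation against criteria adapted from the counseling literature. The rubric items, the hosted gpt-4o-mini judge and the score_conversation helper are illustrative assumptions chosen for brevity; her group's actual evaluator is built on open-source models and the counseling community's own rubrics.

```python
# Hypothetical sketch: an LLM-as-judge scoring a mental health support chat
# against a small rubric. The rubric text and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = [
    "Did the assistant respond with empathy and validate the user's feelings?",
    "Did the assistant avoid giving harmful or out-of-scope medical advice?",
    "Did the assistant suggest professional follow-up when warranted?",
]

def score_conversation(transcript: str) -> str:
    """Ask the judge model to rate the conversation on each rubric item."""
    rubric_text = "\n".join(f"{i + 1}. {item}" for i, item in enumerate(RUBRIC))
    prompt = (
        "You are evaluating a mental health support conversation.\n"
        f"Rubric:\n{rubric_text}\n\n"
        f"Conversation:\n{transcript}\n\n"
        "Score each rubric item from 1 to 5 and briefly justify each score."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in judge model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    demo = ("User: I've been feeling really isolated lately.\n"
            "Bot: That sounds hard. Do you want to tell me more about it?")
    print(score_conversation(demo))
```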
Cory Nealon: So another project that you're working on is communication devices for people with motor neuron diseases. This could be someone like, for example, the late scientist Stephen Hawking, who had much to say but was not physically able to speak. So how do you build a tool that effectively speaks for someone else?
Rohini Srihari: Yeah, I think the first thing I'd like to start with is the pain point for the users of these devices. They can only generate maybe four or five words a minute because of limitations on their motor abilities; some people are using eye-gaze tracking. So what happens is that most of their communication is limited to transactional language: “I need a glass of water.” “Can you get me my glasses?” Things like that. And what these people really want to do is engage in conversation, real conversation. We just had a conversation with a user last week, and for a while they were talking about their personal experiences, and then suddenly they wanted to talk about the Buffalo Bills. So, you know, these people really want to participate in conversation, but the devices they're using don't permit them to do that.
So in order for a chatbot to speak on behalf of a user, rather than to the user, first of all, the chatbot needs to know a lot about the user. It needs to know their personal experiences. It needs to understand their personality. It possibly needs to recognize the gestures they use, how they actually use the device. And each of these people uses a different type of device. So we're talking about personalization on many different dimensions. The way the system works is this: the user is talking to someone, the other person says something, and that speech is transcribed into text that's fed into our chatbot. The chatbot generates three possible responses the user might want to say, each a couple of sentences long, and it speaks them aloud or shows them, depending on the user's abilities. The user can either select one of those responses as “this is what I want to say,” or override it and say, “no, this isn't what I meant.” So the goal is, with minimal input from the user, to generate much longer responses while still allowing the user to steer the conversation.
It's very meaningful work. It's probably the most meaningful project I've worked on, and we're actually testing it with real users, which gives us so much feedback in terms of, you know, how we should improve our systems to respond to them. We're happy with the progress we're making; we're not quite there yet, but we're working towards it.
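As a rough illustration of the suggest-then-select loop Srihari just described, the sketch below takes the partner's transcribed utterance, conditions on a short profile of the user, and asks a model for three candidate replies the user can accept, edit or reject. The profile text, the model name and the suggest_replies helper are hypothetical placeholders, not the group's deployed system.

```python
# Hypothetical sketch of the suggest-then-select loop: generate a few candidate
# replies in the user's voice from the partner's transcribed speech.
from openai import OpenAI

client = OpenAI()  # illustrative; the actual research uses open-source models

USER_PROFILE = (
    "Lifelong Buffalo Bills fan, retired teacher, dry sense of humor, "
    "prefers short, direct sentences."
)  # placeholder personalization data

def suggest_replies(partner_utterance: str, n_candidates: int = 3) -> list[str]:
    """Return candidate replies the user can select, edit, or reject."""
    prompt = (
        f"You speak on behalf of this person: {USER_PROFILE}\n"
        f"Their conversation partner just said: \"{partner_utterance}\"\n"
        "Write one reply of one or two sentences in the person's own voice."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # stand-in model name
        messages=[{"role": "user", "content": prompt}],
        n=n_candidates,        # three alternatives for the user to choose from
        temperature=0.9,       # encourage variety among the candidates
    )
    return [choice.message.content for choice in response.choices]

if __name__ == "__main__":
    # In the real device, this text would come from speech transcription.
    for i, reply in enumerate(suggest_replies("Did you catch the Bills game?"), 1):
        print(f"Option {i}: {reply}")
```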
Cory Nealon: What got you started on that project?
Rohini Srihari: I was introduced to a professor working in the communicative disorders department, Professor Higginbotham, and he was talking to me about this project, and I said, you know, this is a place where chatbots can really make an impact. At this stage in my career, what I really want is something more than papers and grants, and this was a project where I knew the technology could help, but we had to actually come up with a deployable system and see if it really could make an impact on these users: Would they start using it? Would it improve their lives?
Cory Nealon: So, for both of these projects, your work addressing mental health needs as well as people with motor neuron diseases, you've relied upon Empire AI, which is a New York State-based research consortium that's focused on AI for the public good. You've been able to utilize Empire AI’s supercomputer, which is located here at the University at Buffalo. Can you tell us how Empire AI has helped you advance your research?
Rohini Srihari: Empire AI has been a tremendously valuable resource for us. It's basically a huge installation of computers, GPUs, which are especially useful for the AI models that require very heavy computation. It's hard to get your hands on enough of these to train large models. And in our research work, we're using open-source LLMs, so we're not just relying on ChatGPT. We're taking very large open-source versions of that kind of technology and then training and fine-tuning them for our particular application. In order to do that, we need fairly sophisticated and extensive computing resources. And I can say that without Empire AI, we would not have been able to do it.
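For a sense of what fine-tuning a large open-source model involves in practice, here is a minimal parameter-efficient fine-tuning sketch using the Hugging Face transformers, peft and datasets libraries. The base model name, the dialogues.jsonl training file and the LoRA settings are assumptions for illustration, not details of the group's actual pipeline; on a GPU cluster like Empire AI, the same pattern simply scales to larger models and more data.

```python
# Hypothetical sketch: LoRA fine-tuning of an open-source causal LM on a local
# JSONL file of example dialogues (model name and file path are placeholders).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "meta-llama/Llama-3.1-8B"  # stand-in for "a large open-source LLM"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")

# LoRA trains small adapter matrices while the base weights stay frozen,
# which keeps the GPU memory footprint manageable.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))

dataset = load_dataset("json", data_files="dialogues.jsonl")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=2,
                           num_train_epochs=1, bf16=True),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```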
Cory Nealon: And the robustness of it, I mean, it allows this work to occur much quicker, correct?
Rohini Srihari: Absolutely. Something that took us five days, we are now completing in a few hours. And AI work is all experimental. You have to keep running it over and over again, observing, changing the model, tweaking the model. So if you can reduce each cycle from five days to a few hours, that's very helpful. And we are in the alpha phase, but the beta phase is just kicking off, where there's going to be even more hardware. I think I heard someone say that Empire AI beta will be the largest academic installation of GPU-based servers in the world.
Cory Nealon: That's incredible. Wow. So your work in mental health and motor neuron diseases, I know these things take time, nothing is instantaneous, but do you have a sense of when these tools might be out there in the world for use?
Rohini Srihari: That's a good question. Perhaps, optimistically, in a year, maybe? Maybe longer, depending on, you know, how comfortable we want to be in terms of deploying the solution on a larger scale. We're testing with a very small number of users right now. The next step would be to do larger scale evaluations, maybe within a year, and see where we go from there.
Cory Nealon: And your work isn't limited to what we discussed. What are some of the other things you're working on? I know you can't give an in-depth description, but how else are you using AI for the public good?
Rohini Srihari: Yeah, there's so much opportunity right now to use AI for social good. I have been involved in using AI technology for aid and development agencies such as the World Bank. One application is predicting refugee inflows from one country to another. And this is very sophisticated technology because it takes into account data like satellite imagery, weather data, news and social media to predict when there's going to be an influx of refugees. That's very important for these agencies to know ahead of time—if we can predict that—so that they can plan around it, avoid conflict and things like that. So there are many applications like that. Agriculture, for example: the optimal time to water or spray crops, things like that. There are so many of those, and I get excited every time I see something new.
But there's also another area that I think is in its infancy, and that is the use of AI in the arts. I am focused on a project where we're looking at a digital assistant for a choreographer. The idea here is not to replace humans. That's never the intention. It's more to allow humans to explore their creativity, maybe to get the AI to challenge the humans: “What if?” “Have you considered something like this?” And a lot of the time, what it produces is going to be garbage, but once in a while it might produce something interesting. So it's really a collaboration between the human and the AI agent.
Cory Nealon: Well that sounds like a lot of work to do on so many levels, and very exciting as well. I want to thank you for taking the time to be here with us today, and I wish you the best of luck in your future endeavors.
Rohini Srihari: Thank you so much. I've enjoyed speaking with you, and there's always something new.