Seminar: Modeling Human Interaction: Analyzing Recorded Meetings for Improved Speech Understanding Systems

Posted on April 28th, 2005

Rebecca Bates

Tuesday, May 3, 2005 at 11:30am in Olin 320

Speech understanding requires examining language at many levels. Accurate speech recognition systems address one level by producing automatic word-level transcriptions, but computer understanding of human speech requires more than a string of words; it requires capturing the meaning of those words in context. When studying human meetings, two perspectives can be taken: content and interaction. Content can be examined by looking at words and topics. Interaction has been investigated through discourse information, defined by sentence-level dialog acts such as “statement”, “question”, “agreement”, or “backchannel”. This work looks at higher levels of interaction that are composed of sequences of dialog acts. Examining dialog structure rather than specific words or topics allows for generalizable models of meeting style that are independent of a particular meeting task. We have defined high-level interaction labels, called meeting acts, that can be consistently applied to recorded meetings by student researchers. Examples are “negotiation”, “discussion”, “brainstorming”, “reporting”, and “planning”. These labels are used to train statistical models of interaction that can classify meeting style. This talk will begin with an overview of speech recognition systems and then present current work investigating human interaction during meetings.
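
The central idea of the abstract, classifying meeting style from sequences of dialog acts rather than from the words themselves, can be illustrated with a small sketch. The snippet below is not the speaker's system; it is a minimal, hypothetical Python example in which the training data, label names, and bigram scoring scheme are all invented for illustration. It fits a smoothed per-class model of dialog-act bigrams and picks the most likely meeting act for a new sequence.

```python
import math
from collections import defaultdict

# Hypothetical training data: each meeting segment is a sequence of
# sentence-level dialog acts, labeled with a high-level "meeting act".
TRAIN = [
    (["statement", "statement", "question", "statement"], "reporting"),
    (["question", "statement", "agreement", "backchannel"], "discussion"),
    (["statement", "question", "statement", "agreement"], "negotiation"),
    (["question", "statement", "statement", "backchannel"], "discussion"),
]

def train_bigram_models(data):
    """Count dialog-act bigrams per meeting-act label."""
    counts = defaultdict(lambda: defaultdict(int))
    priors = defaultdict(int)
    vocab = set()
    for acts, label in data:
        priors[label] += 1
        seq = ["<s>"] + acts          # sequence-start marker
        vocab.update(seq)
        for prev, cur in zip(seq, seq[1:]):
            counts[label][(prev, cur)] += 1
    return counts, priors, vocab

def classify(acts, counts, priors, vocab):
    """Return the meeting-act label with the highest add-one-smoothed log score."""
    total = sum(priors.values())
    V = len(vocab)
    best, best_score = None, float("-inf")
    for label in priors:
        score = math.log(priors[label] / total)
        label_total = sum(counts[label].values())
        seq = ["<s>"] + acts
        for prev, cur in zip(seq, seq[1:]):
            c = counts[label][(prev, cur)]
            score += math.log((c + 1) / (label_total + V * V))
        if score > best_score:
            best, best_score = label, score
    return best

counts, priors, vocab = train_bigram_models(TRAIN)
print(classify(["question", "statement", "agreement"], counts, priors, vocab))
```

A real system would be trained on annotated meeting corpora and would likely use richer sequence models, but even this toy bigram scorer shows how interaction structure can be modeled independently of the particular words or topics in a meeting.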

Pizza will be served.

Rebecca Bates is an Assistant Professor in the Computer & Information Sciences Department at Minnesota State University, Mankato.

