Visual Analysis of Verbatim Text Transcripts
University of Konstanz and UOIT
Verbatim text transcripts capture the rapid exchange of opinions, arguments, and information among participants of a conversation. As a form of communication based on social interaction, multiparty conversations are characterized by an incremental development of their content structure. In contrast to highly edited text data (e.g., literary, scientific, and technical publications), verbatim text transcripts contain non-standard lexical items and syntactic patterns. The automatic analysis of these transcripts therefore poses multiple challenges.
In this talk, I will present approaches developed in the context of the VisArgue project that enable humanities and social science scholars to examine verbatim text data from different perspectives in order to capture strategies of successful rhetoric and argumentation. To analyze why specific discourse patterns occur in a transcript, three main pillars of communication are studied by answering the following questions: (1) What is being said? (2) How is it being said? (3) By whom is it being said?
In addition to reporting on visualization techniques for the analysis of conversation dynamics, I will argue for the importance of tuning automatic content analysis models to unique textual characteristics, such as those found in verbatim text transcripts. In particular, I will present a visual analytics framework for the progressive learning of topic modeling parameters. Our human-in-the-loop process simplifies the model tuning task through intuitive user feedback on the relationship between topics and documents.
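The idea of refining a topic model through user feedback on topic-document relationships can be illustrated with a minimal sketch. This is a hypothetical toy example, not the VisArgue framework itself: the keyword-weighting scheme, function names, and boost parameter are all assumptions made for illustration only.

```python
# Hypothetical sketch of human-in-the-loop topic refinement.
# Documents are scored against topics by summed keyword weights;
# user feedback on a document's topic boosts that topic's weights.

def assign_topic(doc, topic_weights):
    """Score each topic by summing the weights of the doc's terms;
    return the best-scoring topic."""
    scores = {
        topic: sum(weights.get(term, 0.0) for term in doc.split())
        for topic, weights in topic_weights.items()
    }
    return max(scores, key=scores.get)

def apply_feedback(doc, topic, topic_weights, boost=2.0):
    """The analyst asserts that `doc` belongs to `topic`: increase
    that topic's weight for every term in the document."""
    for term in doc.split():
        topic_weights[topic][term] = topic_weights[topic].get(term, 0.0) + boost

# Two seed topics with initial keyword weights (illustrative values).
topic_weights = {
    "economy": {"budget": 2.0, "tax": 2.0},
    "health":  {"hospital": 2.0, "care": 2.0},
}

doc = "the tax budget care debate"
before = assign_topic(doc, topic_weights)   # "economy" (tax + budget outweigh care)

# The analyst corrects the assignment to "health".
apply_feedback(doc, "health", topic_weights)
after = assign_topic(doc, topic_weights)    # "health" (its weights now dominate)
```

Each round of feedback nudges the model's parameters, so the topic-document mapping improves progressively rather than being re-learned from scratch.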
Mennatallah El-Assady is a PhD candidate in the group for Data Analysis and Visualization at the University of Konstanz (Germany) and in the Visualization for Information Analysis lab at the University of Ontario Institute of Technology (Canada). Her doctoral studies are co-supervised by Dr. Daniel Keim and Dr. Christopher Collins. Her general research interest is in combining data mining and machine learning techniques with visual analytics, specifically for text data. In particular, she is researching methods for the automatic analysis and visualization of transcribed verbatim text corpora. She has worked in close collaboration with political science and linguistics scholars for several years as part of the VisArgue project, where she has led the development of a visual analytics framework for analyzing conversations and political debates. In recent years, she has initiated and co-organized two editions of the Visualization for the Digital Humanities workshop, co-located with IEEE VIS (2016, 2017), in addition to the workshop on Visualization as added value in the development, use and evaluation of Language Resources, co-located with LREC (2016, 2018).
Audience: Internal to U of T