Speaker Series
Assessment Vulnerability: Systematically Identifying Student Overreliance on Large Language Models in Learning Processes
- Presenter: Evgenia Samoilova, Ph.D., Postdoctoral Researcher, Chair for Complex Multimedia Application Architectures, University of Potsdam; Tobias Moebert, Ph.D., University of Potsdam
- Date: July 28, 2025
- Location: Virtual - Microsoft Teams, 3:00PM - 4:00PM
- Abstract: This presentation describes our journey from evaluating LLM capabilities to identifying when students become overly dependent on these tools. We started in 2023 by testing ChatGPT-3.5 on 22 university exams across different subjects, finding it could pass about half of them. Through discussions with faculty and reviewing recent research on LLMs in education, we realized the more important question wasn't what these models can do, but how students might become too dependent on them. Since late 2024, our work in the University of Potsdam's computer science department has focused on developing a practical method to identify when students are overly relying on LLMs. Instead of using the same approach for every situation, we consider factors like available resources, learning objectives, and the specific subject matter. Drawing on research in human-AI collaboration, we compare how students perform relative to AI and examine different types of tasks. The presentation will cover the problem of overreliance and relevant research findings, our approach to detecting it, and examples from five computer science courses we have examined. Our goal is to better understand how LLMs fit into student learning and to help develop assessment methods that are both educationally sound and realistic about AI use in academic settings.
- Bio: Tobias Moebert is a computer scientist and researcher with a strong background in human-technology interaction and mobile application development. He earned his degree in Computer Science from the University of Potsdam and began his career in the tech industry, where he worked from 2008 to 2013, specializing first in complex web applications and later in iOS mobile apps. Since 2013, Tobias has contributed to several interdisciplinary research projects as a researcher, including the ZIM project MOTIVATE and multiple BMBF-funded initiatives such as EMOTISK, ComplexEthics, and miiConsent. In 2023, he was involved in a study examining the performance of large language models in online exams. He completed his Ph.D. on the Perception of Complexity in Human-Technology Interaction, furthering his expertise in how users engage with complex digital systems. Tobias is also active in academia as a lecturer, currently teaching the courses Pervasive Computing and Ethics for Nerds at the University of Potsdam. In May 2024, he was co-awarded the State Teaching Award for the seminar Ethics for Nerds, alongside colleagues Ulrike Lucke, Ann-Marie Gursch, and Lilian Hasse.
Evgenia Samoilova is a postdoctoral researcher at the Chair for Complex Multimedia Application Architectures, University of Potsdam. Her research bridges digital education, data literacy, and the role of AI in higher education. Her current empirical work focuses on how large language models (LLMs) are shaping assessment practices in academic contexts. At QUADRIGA, the Berlin-Brandenburg center for data literacy, she is part of the academic leadership team and conducts applied research on instructional design for research data and research software education. Her background includes work on online learning engagement, learner workload, and data quality. She has collaborated across disciplines to advance sustainable, open approaches to data literacy development. She holds a Ph.D. in Sociology from the Bremen International Graduate School of Social Sciences. She is an alumna of the Wikimedia Open Knowledge Fellowship Program and the German-American Frontiers of Engineering (GAFOE) symposium, co-organized by the Alexander von Humboldt Foundation and the U.S. National Academy of Engineering.
Designs to Support Better Visual Data Communication
This talk was held as part of USF's Bellini College of Artificial Intelligence, Cybersecurity and Computing RISE Speaker Series.
- Presenter: Cindy Bearfield, Ph.D., Assistant Professor, School of Interactive Computing, Georgia Tech
- Date: April 18, 2025
- Location: Virtual - Microsoft Teams
- Abstract: Well-chosen data visualizations can lead to powerful and intuitive processing by a viewer, both for visual analytics and data storytelling. When badly chosen, visualizations leave important patterns opaque or misunderstood. So how can we design an effective visualization? I will share several empirical studies demonstrating that visualization design can influence viewer perception and interpretation of data, referencing methods and insights from cognitive psychology. I leverage these study results to design natural language interfaces that recommend the most effective visualization to answer user queries and help them extract the 'right' message from data. I then identify two challenges in developing such an interface. First, human perception and interpretation of visualizations are riddled with biases, so we need to understand how people extract information from data. Second, natural language queries describing takeaways from visualizations can be ambiguous and thus difficult to interpret and model, so we need to investigate how people use natural language to describe a specific message. I will discuss ongoing and future efforts to address these challenges, providing concrete guidelines for visualization tools that help people more effectively explore and communicate data.
- Bio: Cindy Xiong Bearfield is an Assistant Professor in the School of Interactive Computing at Georgia Institute of Technology. Bridging the fields of psychology and data visualization, she aims to understand the cognitive and perceptual processes that underlie visual data interpretation and communication. Her research informs visualization design that elicits critical thinking and calibrated trust in complex data. She received her Ph.D. in Cognitive Psychology and M.S. in Statistics from Northwestern University. Her research has been recognized with an NSF CAREER award. She has received paper awards at premier psychology and data visualization venues, including ACM CHI, IEEE PacificVis, Psychonomics, and IEEE VIS.
Navigating NLP in the Generative AI Era: Challenges, Risks, and New Frontiers
- Presenter: Bonnie Dorr, Ph.D., Professor, Department of Computer and Information Science and Engineering, University of Florida
- Date: April 7, 2025
- Location: Virtual - Microsoft Teams, 3:00PM - 4:00PM
- Abstract: This talk explores the future of Natural Language Processing (NLP) in the Generative AI (GenAI) era, highlighting the need for hybrid approaches that integrate linguistic principles with neural models to enhance interpretability, capture implicit meanings such as beliefs and intentions, and ensure transparency. Representative examples of GenAI output illustrate areas requiring further exploration, particularly in relation to task-specific goals, such as machine translation and social engineering detection. Recent research in UF’s NLP&Culture Laboratory further exemplifies these principles: (1) addressing ambiguities with external knowledge to produce more robust and explainable inferences; (2) using semantic role labeling to detect and address communication divergences; (3) combining structured chunking with neural techniques to optimize entity and relationship recognition; and (4) leveraging NLP-driven metrics to assess how communication dynamics impact vulnerability management. By embedding structured linguistic insights into GenAI models, these systems can become more reliable, interpretable, and adaptable to diverse linguistic contexts, tackling key challenges while unlocking new opportunities for NLP applications.
- Bio: Bonnie J. Dorr is a Professor in the Department of Computer and Information Science and Engineering at the University of Florida, where she directs the Natural Language Processing & Culture (NLP&C) Laboratory. She is also an affiliate of the Florida Institute for National Security, a former program manager of DARPA's Human Language Technology programs, and Professor Emerita at the University of Maryland. Dorr is a recognized leader in artificial intelligence and natural language processing, specializing in machine translation and cyber-aware language processing. Her research explores neural-symbolic approaches for accuracy, robustness, and explainable outputs. Applications include cyber-event extraction for detecting and mitigating attacks, detecting influence campaigns, and building interpretable models. She is an NSF PECASE recipient, a Sloan Fellow, and a Fellow of AAAI, ACL, and ACM.
Personas, Roleplaying, and the Imitation of Human Cognition in Large Language Models
This talk was held as part of USF's Institute for AI+X Seminar.
- Presenter: Stephen Steinle
- Date: March 14, 2025
- Location: ENB 118, 1:00PM - 2:00PM
- Abstract: One of the growing trends in the use of large language models (LLMs) is to use descriptions of people and their preferences to influence response generation. These descriptions, known as personas, have been shown to substantially impact performance on a variety of tasks and remain an area of rapidly advancing research. This discussion will introduce the concept of personas, their implementations in research, an example use case in published work, and some potential pitfalls. Cognitive modeling will also be briefly discussed to highlight the distinction between LLM roleplaying and human cognition.
- Bio: Stephen Steinle is a second-year Ph.D. student in the Computer Science department at the University of South Florida. He works in the Advancing Machine and Human Reasoning (AMHR) Lab under the guidance of Dr. John Licato. His work has focused on cognitive modeling, next-word prediction, and methods in digital twinning for making LLMs act more like specific individuals. His current projects involve multi-agent interactions during adversarial and collaborative tasks such as board games and wargames.