Colloquia

Spring 2025 Colloquia

Unless otherwise noted, Spring 2025 colloquia will be held on Thursdays at 12:45 pm in Stanley Thomas 316. All colloquia will be available for in-person attendance as well as remote attendance via Zoom. Current Tulane faculty, staff, and students are encouraged to attend in person. Zoom details will be provided via the announcement listserv, or you may email mrougelot@tulane.edu to request the corresponding link. If you would like to receive notifications about upcoming seminars, you can subscribe to the announcement listserv.

Feb 17

Understanding the Security Threats in Public Blockchains

Kai Li | San Diego State University

This talk will be held on Monday, February 17th, at 11:00 a.m. in Stanley Thomas 316. Please note the special weekday and time for this event. Zoom details will be provided via the announcement listserv, or you may email mrougelot@tulane.edu to request the corresponding link.

Abstract: Over the last decade, public blockchains have been increasingly adopted to revolutionize various traditional businesses, such as finance, supply chain, healthcare, and gaming. The advantages of blockchain are mainly attributed to its design as an open, decentralized, and anonymous peer-to-peer (P2P) network, which can eliminate centralized authority and protect users' privacy. However, this design of the P2P network has brought new security threats. First, blockchain's open-membership nature allows everyone to join the P2P network, which creates new attack surfaces for adversaries to join the network and launch Denial-of-Service (DoS) attacks against critical network infrastructures. Second, the anonymous nature of blockchain allows everyone to trade assets without revealing their real identity, which has created an ideal place for criminals to conduct various cybercrimes and steal assets from victims. These security threats jeopardize the usability of blockchain and pose significant financial risks to blockchain users.

In this talk, I will discuss how we have enhanced public blockchains' security in two directions. In the first direction, we systematically analyzed and tested critical network infrastructures inside the blockchain, including RPC services, P2P networks, and the mempool, through which we discovered and fixed multiple severe DoS vulnerabilities. In the second direction, we developed novel detection systems to identify emerging investment scams and phishing attacks on the blockchain, including free-giveaway scams, arbitrage-bot scams, and address-poisoning attacks, which shed light on their deception strategies and protected users from falling victim to such fraudulent activities.

About the Speaker: Kai Li is a tenure-track Assistant Professor at San Diego State University. He received his Ph.D. from Syracuse University in 2022. His research interests are broadly in system and network security, with a current focus on discovering vulnerabilities and detecting cybercrime activities in widely deployed decentralized systems such as blockchain. His research papers have been published at top-tier cybersecurity conferences such as USENIX Security, ACM CCS, NDSS, ACM SIGMETRICS, ACM IMC, and ESEC/FSE. He has also received several research grants from the NSF and the Ethereum Foundation. He currently serves on the Technical Program Committees of leading conferences such as ACM CCS, NDSS, ACM Web, ACSAC, CODASPY, and ICCCN. In addition, his research findings have been widely acknowledged by bug bounty programs in the blockchain developer community.

Feb 20

Hardware-Assisted Software Support for Heterogeneous Computing 

Jiyuan Wang | University of California, Los Angeles

This talk will be held on Thursday, February 20th, at 12:30 p.m. in Stanley Thomas 316. Please note the special time for this event. Zoom details will be provided via the announcement listserv, or you may email mrougelot@tulane.edu to request the corresponding link. 

Abstract: The end of Moore’s law has led to a plateau in traditional single-core and software-based performance optimizations, highlighting the importance of incorporating hardware heterogeneity and specialization in software systems. Specialized hardware accelerators like FPGAs, GPUs, and quantum computers have become, or are poised to become, a prominent part of the future computing landscape. However, developing heterogeneous applications is limited to a small subset of programmers with specialized hardware knowledge. To unlock the full potential of heterogeneous computing, it is time for the software engineering community to design a new wave of testing and debugging tools for heterogeneous application development.

In this talk, I will demonstrate how software tools can be reengineered to better support heterogeneous systems by understanding and leveraging hardware accelerators. First, I will show how understanding the root causes of quantum hardware noise enables the redesign of differential testing to isolate quantum software bugs from quantum hardware-induced noise. Second, I will illustrate how leveraging the parallel processing capabilities of FPGAs can enhance fuzz testing by accelerating test case generation and significantly boosting performance. Finally, I will share my vision for the future of compiler development in heterogeneous and quantum computing.

About the Speaker: Jiyuan Wang is a PhD student in the Computer Science Department at the University of California, Los Angeles, where he is co-advised by Miryung Kim and Harry Xu. His research focuses on developing efficient and effective testing and debugging methods for heterogeneous computing, including GPUs, FPGAs, and quantum computers. By redesigning traditional software approaches with hardware-aware optimizations, he creates developer tools that bridge the gap between hardware accelerators and traditional software development. His work appears in top-tier conferences such as ICSE, FSE, ASE, and ASPLOS, and has been awarded an ACM SIGSOFT research highlight.

Feb 25

Redundancy Removal for Accelerating Graph Processing Workloads 

Mahbod Afarin | University of California, Riverside

This talk will be held on Tuesday, February 25th, at 12:30 p.m. in Stanley Thomas 316. Please note the special weekday and time for this event. Zoom details will be provided via the announcement listserv, or you may email mrougelot@tulane.edu to request the corresponding link. 

Abstract: Analyses of large graphs are becoming an increasingly important computational workload as graph analytics are applied across various domains. Consequently, a significant amount of research has focused on developing frameworks that exploit parallelism across diverse hardware platforms, from a single GPU or multicore server to clusters of servers and GPUs. This talk explores a complementary approach that, in addition to parallelism, comprehensively reduces redundancy to enhance scalability. Redundancy exists not only in computation and value propagation but also in graph traversal and data transfer across the memory hierarchy, all of which can be optimized for improved performance. In this talk, I will present three techniques to reduce redundancy in static and dynamic graph processing: Core Graph, Common Graph, and Unchanged Vertex Values (UVVs). Core Graph eliminates redundancy in static graphs by constructing a small proxy graph and leveraging a two-phase query evaluation. The first phase runs on the proxy graph, incurring low overhead while producing nearly precise results. The second phase refines these results on the original graph, ensuring full precision. Common Graph optimizes evolving graph processing by converting costly edge deletions into cheaper edge additions. To eliminate redundancy across snapshots, we identify a common subgraph across all snapshots and perform a single shared computation followed by incremental computations for each snapshot. This method also breaks the sequential dependency of traditional streaming approaches, unlocking new parallelism opportunities. UVVs leverage the observation that many vertex values remain unchanged across snapshots. Using intersection-union analysis, UVVs compute lower and upper bounds on vertex values, skipping redundant computations and focusing only on vertices with potential changes. By processing these efficiently across snapshots, we further minimize redundancy and enhance performance.  

About the Speaker: Mahbod Afarin is a Ph.D. candidate at the University of California, Riverside, working with Professors Rajiv Gupta and Nael Abu-Ghazaleh. His research lies at the intersection of graph algorithms and computer architecture, focusing on the performance and scalability of graph applications. His current interests include graph hardware accelerators, streaming and evolving graph processing, and code size reduction for ARM and x86 binaries. His papers have been published in MICRO, ASPLOS, and EuroSys. He has received several awards and fellowships, including the UCR Dissertation Completion Fellowship and the Dean’s Distinguished Fellowship. For more information, please visit his website at https://mahbod-afarin.github.io/.

Feb 27

AI for Security: Adaptive Attacks and Proactive Defenses in Dynamic and Distributed Systems 

Henger Li | Tulane University

Please join us for Henger Li’s PhD dissertation defense talk, which is described below.

This talk will be held on Thursday, February 27th, at 12:30 p.m. in the Lavin-Bernick Center, Room 210 (McKeever Conference Room). Please note the special time and venue for this event. Zoom details will be provided via the announcement listserv, or you may email mrougelot@tulane.edu to request the corresponding link. 

Abstract: The escalating complexity of cyber threats calls for transformative approaches that leverage artificial intelligence (AI) to develop adaptive attacks and proactive defenses. This dissertation explores "AI for Security" as a unifying theme, focusing on how advanced AI techniques, including reinforcement learning (RL), game theory, and meta-learning, can address critical security challenges. By examining the interplay between offensive and defensive strategies, the research advances the understanding of vulnerabilities in existing systems and proposes innovative mechanisms to enhance resilience. The study is centered on two key domains: Moving Target Defense (MTD) and Federated Learning (FL).

MTD employs dynamic system configuration changes to introduce uncertainty and mitigate risks, yet optimizing the spatial-temporal dynamics of these changes remains challenging. This dissertation develops a Stackelberg game-based model for MTD that integrates both migration and timing strategies, accounting for configuration-dependent costs and attack timing distributions. By modeling the defender's problem as a semi-Markov decision process, a nearly optimal strategy is derived, demonstrating how game theory and RL can significantly improve MTD's adaptability to persistent threats. 

In the realm of FL, traditional defenses often fall short against adaptive attacks. To better expose these vulnerabilities, the research introduces a model-based RL framework capable of simulating advanced poisoning and backdoor attacks. Experimental results reveal that non-myopic attacks can undermine even state-of-the-art defenses, emphasizing the need for more sophisticated, AI-driven countermeasures. 

Recognizing the shared challenges across MTD and FL, the dissertation proposes a unified Bayesian Stackelberg Markov game framework to address unknown and uncertain attack scenarios. This framework culminates in a meta-Stackelberg defense strategy that combines meta-learning with online adaptation, enhancing the system’s capacity to counter diverse and evolving threats. By showcasing the synergy between RL, game theory, and meta-learning, this research lays the foundation for a cohesive AI-driven methodology that advances both offensive and defensive capabilities, setting the stage for more robust security paradigms in dynamic and distributed environments.  

About the Speaker: Henger Li is a Ph.D. candidate in Computer Science at Tulane University, specializing in (Deep) Reinforcement Learning (RL), (Generative) AI Security, Large Language Model (LLM) Reasoning/Agents, Federated Learning, and Game Theory. He leverages learning-based tools, such as (deep) reinforcement learning, generative models, meta-learning, and game theory, to optimize and safeguard machine learning models and systems, including state-of-the-art classification models, federated learning frameworks, and large language models (LLMs), ensuring their efficiency and robustness. Academically, he has developed novel algorithms and frameworks, with his research published in top conferences such as NeurIPS, ICML, and ICLR, and open-source code released on GitHub. In industry, he has completed internship projects at Amazon and Quantiphi that combined reinforcement learning with large language models, focusing on optimizing supply chain operations and enhancing the reasoning capabilities of generative models.

Mar 13

Advancing Approximate Queries with Innovative Data Summaries and Generative Models

Fuheng Zhao | University of California, Santa Barbara

This talk will be held on Thursday, March 13th at 12:30 p.m. in Stanley Thomas 316. Please note the special time for this event. Zoom details will be provided via the announcement listserv, or you may email mrougelot@tulane.edu to request the corresponding link. 

Abstract: The exponential growth of data has introduced significant challenges for traditional query processing systems, creating a pressing need for faster and more resource-efficient approaches. Approximation techniques have emerged as a promising solution, striking an optimal balance between accuracy and performance. Data summaries, such as samples, sketches, and histograms, play a crucial role in this paradigm by condensing large datasets into compact representations and maintaining critical insights. Additionally, recent advances in generative models (including large language models) open new possibilities for handling incomplete information, accommodating diverse data types, and approximating complex computations at scale. In this talk, I will discuss my research on theoretically grounded data summarization methods, as well as my latest efforts to integrate generative models into data systems. Together, these contributions advance approximate query processing toward realistic, high-impact applications in modern data analytics. 

About the Speaker: Fuheng Zhao is a Ph.D. candidate at the University of California, Santa Barbara, advised by Professor Divyakant Agrawal and Professor Amr El Abbadi. His research has been recognized at top database and machine learning conferences such as VLDB, NeurIPS, and CIDR. He is a recipient of the Microsoft Ph.D. Fellowship and the Charles Dana Fellowship, and was honored with the Outstanding Paper Award from UCSB’s Computer Science Department.

 

Fall 2024 Colloquia

Nov 4

Metric Learning on Topological Descriptors

Yu (Demi) Qin | Tulane University

Please join us for Demi Qin’s PhD dissertation defense talk, which is described below.

This talk will be held on Monday, November 4, at 1:00 p.m. in Boggs 600. Please note the special weekday, time, and venue for this event. Zoom details will be provided via the announcement listserv, or you may email dramil1@tulane.edu to request the corresponding link. 

Abstract: In today's data-driven world, analyzing and visualizing large, complex datasets, ranging from graph-structured data to high-dimensional point clouds and scalar fields, presents significant challenges in terms of scalability, efficiency, and noise robustness. Topological Data Analysis (TDA) has emerged as a powerful tool for capturing the inherent structure of such data through topological descriptors like persistence diagrams and merge trees. However, TDA faces limitations in computational scalability and model interpretability when applied to large-scale datasets. This dissertation bridges the gap between TDA and machine learning (ML) by integrating advanced ML techniques to enhance the scalability, robustness, and interpretability of topological analysis for large-scale datasets. The core contributions include: (1) evaluating and applying topological descriptors across diverse domains, (2) developing efficient ML models for rapid topological descriptor comparisons, and (3) designing visualization techniques to enhance the interpretability of topological features in classification tasks. Applications span medical imaging, climate modeling, and graph analysis, providing improved diagnostic tools, precise weather forecasting, and efficient graph comparisons. This work establishes a foundational connection between machine learning and TDA, laying the groundwork for future advancements through scalable algorithms, enhanced interpretability, and a unified framework for diverse data types.

About the Speaker: Yu (Demi) Qin is a Ph.D. candidate at Tulane University with expertise in machine learning, topological data analysis, and large-scale data processing. Her research focuses on developing advanced machine learning models for scalable data analysis and visualization. Her innovative work has been recognized through multiple internships and publications in leading venues, including a Best Paper Award at IEEE VIS 2024.

Nov 11

MAGIC: Map And Geographic Information Construction, Comparison and Conflation

Erfan Hosseini Sereshgi | Tulane University

Please join us for Erfan Hosseini Sereshgi’s PhD dissertation defense talk, which is described below.

This talk will be held on Monday, November 11, at 9:00 a.m. in Stanley Thomas 316. Please note the special weekday and time for this event. Zoom details will be provided via the announcement listserv, or you may email dramil1@tulane.edu to request the corresponding link. 

Abstract: With the high-volume usage of GPS devices and location-based services, new opportunities are created for data collectors to use such data and possibly improve their products. This new direction brings forward several obstacles on both the theory and application fronts of spatial computing. This dissertation investigates the process of updating road maps using GPS data, introducing novel approaches, enhancing existing methods, and establishing new theoretical frameworks. The map update pipeline comprises independent, scalable ideas that leverage concepts from computational geometry and topology to achieve the desired outcomes. The research consists of three main stages. Construction: utilizing GPS data to create road maps, particularly for areas where alternative data sources are limited, optimizing an existing method based on Fréchet clustering. Comparison: assessing results through refined Graph Sampling and introducing Length-Sensitive Fréchet Similarity (LSFS), a novel approach for comparison. Conflation: merging road maps using Graph Sampling, identifying and integrating missing segments.

 

Nov 11

Interdisciplinary Project Presentations

Yunbei Zhang, Fangzheng Wu, Jiarui Li, Zixuan Liu, Harper Lyon, Zixiang (Zach) Yin, Yunsung Chung, Arie Glazier | Computer Science PhD Students, Tulane University

This event will be held on Monday, November 11th, from 12:30 p.m. to 2:00 p.m. in Boggs 600. Please note the special weekday, time, and venue for this event. Zoom details will be provided via the announcement listserv, or you may email dramil1@tulane.edu to request the corresponding link. 

Yunbei Zhang

Optimal Transport-guided Visual Prompting for Test-Time Adaptation

Abstract: Vision Transformers (ViTs) have shown remarkable capabilities in learning representations. However, their performance degrades when applied to unseen domains. Previous methods have typically engaged in prompt learning during training or modified model parameters at test time through entropy minimization. These approaches have limitations: the former often neglects unlabeled target data, while the latter does not fully address domain shifts. Our method, Optimal Transport-guided Test-Time Visual Prompting (OT-VP), addresses these challenges by leveraging prompt learning at test time to align target and source domains without altering pre-trained model parameters or accessing the training process. This method optimizes the Optimal Transport distance to learn a universal visual prompt for the target domain. With only four learned prompt tokens, OT-VP exceeds state-of-the-art performance across three stylistic datasets—PACS, VLCS, OfficeHome—and one corrupted dataset, ImageNet-C. Furthermore, OT-VP is efficient in terms of memory and computation and can be extended to online settings. 

Fangzheng Wu

The Visualization of 3D Medical Large Image Segmentation

Abstract: This study explores the potential of the Segment Anything Model 2 (SAM2) for effective 3D segmentation in medical imaging. By leveraging memory attention mechanisms, we investigate how SAM2 can capture relationships across 2D slices to build accurate 3D segmentations. A dedicated user interface (UI) facilitates this process, allowing users to input points or bounding boxes to generate initial 2D masks, which are then propagated through subsequent slices. The UI also supports iterative refinement, where users can adjust inputs at any slice to regenerate updated masks across the dataset. This design enables a smooth transition from 2D to 3D segmentation, providing a holistic view of medical structures through stacked 2D masks. Our work highlights SAM2’s adaptability to 3D medical data and offers insights into its capabilities for enhancing segmentation accuracy and user control in complex medical imaging scenarios.

Jiarui Li

Acceleration And Optimization of Conformational Stability Computation for CD4+ T-Cell Epitope Prediction 

Abstract: CD4+ T cells are crucial for adaptive immunity, and predicting antigenic peptides that bind to these cells remains a computational challenge. Traditional methods focus on peptide-MHCII binding, neglecting antigen processing. To address this, our group developed the Antigen Processing Likelihood (APL) algorithm, which utilizes COREX, a time-consuming conformational stability calculation algorithm. While the CPU-parallelized version of COREX reduced computation from hours to minutes, our GPU-accelerated version further cuts this to seconds. This enables both desktop and server use for rapid epitope prediction. Our approach incorporates novel GPU memoization and GPU-parallelizable Markov Chain Monte Carlo sampling, showing high efficiency on multiple antigen benchmarks.

Zixuan Liu

Enhancing LLM Safety Via Constrained Direct Preference Optimization

Abstract: The rapidly increasing capabilities of large language models (LLMs) raise an urgent need to align AI systems with diverse human preferences to simultaneously enhance their usefulness and safety, despite the often conflicting nature of these goals. To address this important problem, a promising approach is to enforce a safety constraint at the fine-tuning stage through a constrained Reinforcement Learning from Human Feedback (RLHF) framework. This approach, however, is computationally expensive and often unstable. In this work, we introduce Constrained DPO (C-DPO), a novel extension of the recently proposed Direct Preference Optimization (DPO) approach for fine-tuning LLMs that is both efficient and lightweight. By integrating dual gradient descent and DPO, our method identifies a nearly optimal trade-off between helpfulness and harmlessness without using reinforcement learning. Empirically, our approach provides a safety guarantee to LLMs that is missing in DPO while achieving significantly higher rewards under the same safety constraint compared to a recently proposed safe RLHF approach.

Harper Lyon

Improving Peer Selection Mechanisms with Incentives and Reviewer Allocation 

Abstract: Peer selection problems are a type of social choice problem in which a group of agents must rank themselves, using only valuations from within the group, in order to assign a benefit or prize to the top-ranked agents. Peer selection problems show up in many areas, but the most familiar to academics is the peer review process used in many large conferences (e.g., NeurIPS). Most of the obvious mechanisms for collecting, aggregating, and selecting winners in these situations fall victim to agents attempting to game the system for their own benefit. Designing mechanisms that are “strategy proof” and do not fall victim to these pitfalls is a key area of research in social choice. A strategy proof mechanism is one that ensures that agents cannot benefit themselves by lying about their ratings of other agents. In this presentation, I will explain the basic elements of the peer selection problem and describe two lines of work that our research group has pursued toward improving the outcomes of strategy proof peer selection mechanisms: the first adds a secondary mechanism that grades agents based on the effort and accuracy of their reviews, improving the incentives of any peer selection mechanism; the second is an empirical evaluation of an already-in-use technique for efficiently allocating reviews to marginal agents/proposals to improve the accuracy of the overall mechanism.

Zixiang (Zach) Yin

iMIND: Interpretable Multi-subject Invariant Neural Decoding

Abstract: Deep learning models have achieved remarkable success across various computer vision tasks, prompting increased interest in their application to neuroscience, particularly for decoding visual information. Recent studies have focused on reconstructing visual stimuli captured during neural recordings, yielding high-resolution, high-precision results that bridge brain activity with visual imagery. Despite these advancements, existing methods provide limited insight into the underlying mechanisms of visual processing in the brain. To address this gap, we present the Multi-subject Invariant Neural Decoding (MIND) model, a novel dual-decoding framework (encompassing semantic and biometric decoding) that facilitates neural interpretability in a data-driven manner and aims to deepen understanding of brain vision functionalities. Our MIND model consists of three key steps: establishing a shared neural space across subjects using a ViT-based masked autoencoder, disentangling neural features into subject-specific and object-specific ones, and performing dual decoding for both biometric and semantic classification tasks. Experimental results demonstrate that MIND achieves state-of-the-art decoding performance with minimal scalability limitations. Furthermore, MIND empirically generates voxel-object activation fingerprints that reveal object-specific neural patterns, and enables investigation of subject-specific variations in attentional response to identical stimuli. These findings lay the groundwork for more interpretable and generalizable multi-subject neural decoding, advancing our understanding of the object-voxel relationship and the brain’s visual processing dynamics.

Yunsung Chung

FBA-Net: Foreground and Background Aware Contrastive Learning for Semi-Supervised Atrium Segmentation

Abstract: Medical image segmentation of gadolinium enhancement magnetic resonance imaging (GE MRI) is an important task in clinical applications. However, manual annotation is time-consuming and requires specialized expertise. Semi-supervised segmentation methods that leverage both labeled and unlabeled data have shown promise, with contrastive learning emerging as a particularly effective approach. In this paper, we propose a contrastive learning strategy of foreground and background representations for semi-supervised 3D medical image segmentation (FBA-Net). Specifically, we leverage the contrastive loss to learn representations of both the foreground and background regions in the images. By training the network to distinguish between foreground-background pairs, we aim to learn a representation that can effectively capture the anatomical structures of interest. Experiments on three medical segmentation datasets demonstrate state-of-the-art performance. Notably, our method achieves a Dice score of 91.31% with only 20% labeled data, which is remarkably close to the 91.62% score of the fully supervised method that uses 100% labeled data on the left atrium dataset. Our framework has the potential to advance the field of semi-supervised 3D medical image segmentation and enable more efficient and accurate analysis of medical images with a limited amount of annotated labels.

Arie Glazier

Learning Behavioral Soft Constraints from Demonstrations

Abstract: Many real-life scenarios require humans to make difficult trade-offs: do we always follow all the traffic rules, or do we violate the speed limit in an emergency? These scenarios force us to weigh collective rules and norms against our own personal objectives and desires. To create effective AI-human teams, we must equip AI agents with a model of how humans make these trade-offs in complex environments where there are implicit and explicit rules and constraints. Agents equipped with these models will be able to mirror human behavior and/or draw human attention to situations where decision making could be improved. To this end, we propose a novel inverse reinforcement learning (IRL) method, Max Entropy Soft Constraint IRL (MESC-IRL), for learning implicit hard and soft constraints over states, actions, and state features from demonstrations in deterministic and non-deterministic environments modeled as Markov Decision Processes (MDPs). Our method enables agents to implicitly learn human constraints and desires without the need for explicit modeling by the agent designer and to transfer these constraints between environment parametrizations. Extending prior work, which only considered deterministic hard constraints in deterministic environments, we compare our method to the classical Max Margin method for IRL and observe that our entropy-based method is able to better estimate the cost of the constraints.

 

Nov 14

Learning to Elect a Committee

Ben Armstrong | University of Waterloo

Abstract: Social choice studies problems of how to make group decisions in the presence of conflicting preference information, such as dividing items fairly between recipients or running an election. We apply an axiomatic perspective to the task of selecting multiple alternatives from voter preferences by asking (1) how often existing social choice functions provide desirable axiomatic properties, and (2) whether machine learning can be used to develop a social choice function which is more likely to exhibit the same desirable properties.

We show that by generating informative training data, neural networks are able to avoid violating a wide range of axioms much more frequently than most existing multi-winner social choice functions. Our results also provide insight into the similarity of existing social choice functions and highlight a tension between axioms focused on proportional representation and those focused on individual excellence.  

About the Speaker: Ben Armstrong is currently a PhD candidate in the School of Computer Science at the University of Waterloo. He studies topics at the intersection of social choice and machine learning with the goal of demonstrating the benefits of interdisciplinary research and highlighting the longstanding parallels between social choice and machine learning. 

 

Nov 19

Geographical and Temporal Adaptation in Social Media Analysis for Emergency Management and Public Health

Xintian Li | Tulane University

Please join us for Xintian Li’s PhD dissertation defense talk.

This talk will be held on Tuesday, November 19, at 8:30 a.m. in Boggs 600. Please note the special weekday, time, and venue for this event. Zoom details will be provided via the announcement listserv, or you may email mnelson10@tulane.edu to request the corresponding link. 

Abstract:  Social media has emerged as a crucial resource for capturing public sentiment and behaviors during crises, offering real-time insights that can aid in emergency management and public health response. This dissertation investigates methods to enhance social media analysis by developing adaptable models that account for geographical and temporal variations. Through case studies in public health and natural disaster response, it demonstrates how advanced text classification and forecasting models can be applied to Twitter data to predict behaviors such as vaccination uptake and hurricane evacuation intent. By combining social media sentiment with geographic data and incorporating domain adaptation techniques, these models improve the accuracy and applicability of predictions across diverse regions and events. The findings underscore the value of social media as a tool for crisis management, enabling authorities to make proactive, data-driven decisions that can help save lives and allocate resources more effectively.


Previous Colloquia