Previous Colloquia

Summer 2023 Colloquia

All colloquia, unless otherwise noted, will be available for in-person attendance as well as remote attendance via Zoom. Current Tulane faculty, staff, and students are encouraged to attend in person. Zoom details will be provided via the announcement listserv, or you may email to request the corresponding link. If you would like to receive notices about upcoming seminars, you can subscribe to the announcement listserv.

May 22

Machine Learning for Engineering

Pascal Van Hentenryck | Georgia Tech

This talk will be held on Monday, May 22nd, at 4:00 p.m. in Stanley Thomas 302. Please note the special weekday and time for this event.

Abstract: The fusion of machine learning and optimization has the potential to achieve breakthroughs in science and engineering that the two technologies cannot accomplish independently. This talk reviews a number of research avenues in this direction, including the concept of optimization proxies and end-to-end learning. Principled combinations of machine learning and optimization are illustrated on case studies in energy systems, mobility, and supply chains. Preliminary results show how this fusion makes it possible to perform real-time risk assessment in energy systems, find near-optimal solutions quickly in supply chains, and implement model-predictive control for large-scale mobility systems.

About the Speaker: Pascal Van Hentenryck is the A. Russell Chandler III Chair and Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech. He is also the director of the NSF Artificial Intelligence Institute for Advances in Optimization. Prior to this appointment, he was a professor of Computer Science at Brown University for about 20 years, led the optimization research group (about 70 people) at National ICT Australia (NICTA) until its merger with CSIRO, and was the Seth Bonder Collegiate Professor of Engineering at the University of Michigan. Van Hentenryck is also an Honorary Professor at the Australian National University.

Van Hentenryck is a Fellow of AAAI (the Association for the Advancement of Artificial Intelligence) and INFORMS (the Institute for Operations Research and the Management Sciences). Van Hentenryck’s research focuses on artificial intelligence and operations research for engineering applications: he explores methodologies that include large-scale optimization and machine learning and applies them to challenging applications in energy, mobility, supply chains and logistics, privacy, and resilience. In particular, he leads the NSF AI Institute for Advances in Optimization, the Socially-Aware Mobility (SAM) project with a major focus on on-demand multi-modal transit systems, and the RAMC Project, whose goal is to study risk-aware market clearing algorithms to integrate large shares of renewable energy. Together with MARTA, he piloted MARTA Reach, an on-demand service to explore the viability of on-demand multi-modal transit systems in Atlanta. Earlier in his career, Van Hentenryck designed and implemented several widely used optimization systems, including the constraint programming language CHIP (the foundation of modern constraint-programming systems) and the modeling language OPL (now an IBM product).

July 18

Interpretable Visual Domain Adaptation from Feature Representation to Multi-modal Semantics

Taotao Jing | Computer Science PhD Student, Tulane University

Please join us for Taotao Jing’s PhD dissertation defense talk, which is described below.

This talk will be held via ZOOM ONLY on Tuesday, July 18th, at 10:00 a.m. Zoom details will be provided via the announcement listserv, or you may email to request the corresponding link.

Abstract: Transfer learning has revolutionized the field of deep learning, allowing the utilization of pre-trained models to address challenges such as limited training data and expensive computational resources. However, the lack of interpretability and transparency in transfer learning methods poses significant obstacles to their practical deployment and trustworthiness. This doctoral dissertation is dedicated to enhancing the transparency and interpretability of visual domain adaptation, a critical task in transfer learning, encompassing feature representation analysis and the integration of multimodal semantic knowledge. By addressing the domain shift across domains and providing human-friendly explanations simultaneously, we seek to provide deeper insights into the transfer learning process and facilitate more interpretable and trustworthy outcomes in real-world applications. The research provides valuable insights for integrating AI systems across domains, promoting transparency, interpretability, and trustworthiness in decision-making. Overall, it contributes to the development of interpretable transfer learning techniques, enhancing the understanding and practical application of deep learning models, and fostering transparent and collaborative human-AI interactions.

July 21

Avik Bhattacharya | Computer Science PhD Student, Tulane University

Please join us for Avik Bhattacharya’s PhD dissertation defense talk, which is described below.

This talk will be held in Stanley Thomas 302 on Friday, July 21st, at 10:00 a.m. Zoom details will be provided via the announcement listserv, or you may email to request the corresponding link.

Abstract: CD4+ T-cell receptors recognize peptide-MHCII complexes displayed on the surface of antigen-presenting cells to induce an immune response. A fundamental problem in immunology is to characterize which peptides (i.e., epitopes) in an antigen induce such a response; this is the problem of computational epitope prediction. Almost all state-of-the-art CD4+ T-cell epitope prediction tools rely exclusively on peptide-MHCII binding affinity scores to make predictions and ignore the crucial phenomenon of antigen processing. In this dissertation, we developed computational approaches that incorporate a structure-based model of antigen processing as a crucial part of the epitope prediction framework. We discuss the application of our approaches in multiple settings, ranging from epitope prediction for pathogenic antigens to personalized epitope prediction for cancer neo-antigens. Our approaches show significant improvement over state-of-the-art approaches and provide a better understanding of the MHCII pathway.

Spring 2023 Colloquia

Unless otherwise noted, the seminars in Spring 2023 will meet on Thursdays at 12:45 pm in Stanley Thomas 316. All colloquia this semester, unless otherwise noted, will be available for in-person attendance as well as remote attendance via Zoom. Current Tulane faculty, staff, and students are encouraged to attend in person. Zoom details will be provided via the announcement listserv, or you may email to request the corresponding link. If you would like to receive notices about upcoming seminars, you can subscribe to the announcement listserv.

Jan 24

Understand, Predict and Enhance User Behavior in Mixed Reality

Yukang Yan | Carnegie Mellon University

This talk will be held on Tuesday, January 24th, at 12:45 p.m. in Stanley Thomas 316. Please note the special weekday for this event.

Abstract: My research is based on the observation that Mixed Reality blurs the boundary between virtuality and reality and significantly impacts how humans perceive the world and themselves. To understand this impact, I conduct user studies to observe and model users’ behavioral and perceptual patterns when they interact with Mixed Reality. Based on the resulting findings, I build interaction techniques to facilitate the two-way communication of how users convey their interaction intents to the computer and how the computer displays information and feedback to the user. More specifically, I develop multimodal input techniques leveraging the user’s hand gestures, head movements, and facial expressions, as well as adaptive user interfaces that consider the user’s mental state and environmental context to optimize the information display. Going a step further, I explore leveraging the unique behavioral patterns that users have in Mixed Reality to enhance their interactions. In previous projects, I enabled users to embody healthier virtual avatars and non-humanoid avatars to gain novel experiences that are hardly possible in reality.

About the Speaker: My name is Yukang Yan. I’m a postdoc at Carnegie Mellon University, working with David Lindlbauer. Before that, I obtained my Ph.D. degree at Tsinghua University under the supervision of Yuanchun Shi. My research interests lie in the intersection area between Human-Computer Interaction (HCI) and Mixed Reality (MR). I publish at top HCI venues (ACM CHI, UIST, Ubicomp) and MR venues (IEEE VR), with one Best Paper Honorable Mention Award at CHI 2020. I served on the program committee of CHI 22-23, UIST 21-22, and the SIGCHI Communications Committee. 

Jan 30

Content Experience in 3D Virtual Reality Worlds: Lens on Learning Transformation

Gayathri Sadanala | Missouri Valley College

This talk will be held on Monday, January 30th in Boggs 600 at 3:00 p.m. Please note the special weekday for this event.

Abstract: Over a little more than the past two decades, the advent of digital technology has transformed online learning drastically. With the number of online students increasing every year, and especially during a pandemic that calls for social distancing, online orientation is necessary for students to have a smooth transition into online learning. This talk will present the important factors that contribute to successful student learning, engagement, and experience within an educational 3D virtual reality orientation, and its research contribution to the intersection of education, human-computer interaction, and computer science.

About the Speaker: Gayathri Sadanala is an Assistant Professor at Missouri Valley College. She is a doctoral candidate at the University of Missouri, majoring in Human-Computer Interaction within Information Science and Learning Technologies. She holds a Master’s in Engineering Technology and a Bachelor’s in Computer Science Engineering. Her research interests are interactions in virtual reality environments, data science, visualization, usability, and user experience studies.

Feb 7

Designing Adaptive and Context-aware AR/VR Interactions

Rawan Alghofaili | George Mason University

This talk will be held on Tuesday, February 7th, at 12:45 p.m. in Stanley Thomas 316. Please note the special weekday for this event.

Abstract: Anyone who has witnessed the adoption of the Internet remembers the static and non-context-aware websites of the past. Compare those with the powerful engines behind the websites of today. These context-aware engines are equipped with machine learning and optimization algorithms that allow them to adapt and cater to their users' behavior and environment. This deep understanding of users and their needs creates a more personalized and efficient experience. Current AR/VR systems are not quite as static as the websites of yesteryear, but they still have a long way to go before becoming as powerful and context-aware as browsing the web today. Rawan aspires to facilitate the road to context-aware AR/VR systems that elegantly adapt their interactions to the user's behavior and environment. Rawan will discuss her work on AR/VR adaptive navigation aids, VR environment design via visual attention, and in-situ mobile AR content authoring via 2D-to-3D curve projection.

About the Speaker: Rawan Alghofaili received her Ph.D. in Computer Science from George Mason University under the mentorship of Professor Craig (Lap-Fai) Yu. She received an M.S. from the University of Massachusetts-Boston in 2018 and an MEng. from Cornell University in 2014. Rawan was a research intern with Adobe's Creative Intelligence lab in the Summer of 2020 and a research intern with Meta Reality Labs in the Summer and Fall of 2021. Her work focuses on creating context-aware adaptive interactions for AR/VR via computational interaction design, which involves blending the wonderful worlds of ML, Computer Graphics, and HCI.

Feb 9

Holistic Data Protection: Policy-based Privacy-preserving Data Management

Primal Pappachan | Pennsylvania State University

Abstract: The era of Big Data, AI, and the IoT is resulting in a significant increase in the generation of personally identifying data. Protecting personal information has become more critical than ever, especially as new privacy laws like the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the Virginia Consumer Data Protection Act (VCDPA) recognize the importance of both privacy and security in data management. However, effective data protection involves more than just addressing security challenges like unauthorized access. It also requires addressing privacy concerns, including the risk of inferences that can potentially threaten individuals' anonymity and safety.

In this talk, I will discuss my research on designing and implementing data protection methods that take into account the trade-offs between privacy, utility, and user customization. First, I will delve into policy-based approaches, which use access control as their primary defense against unauthorized access to personal data and provide users with the ability to customize their own data protection preferences. I will also explore the challenges of scalability and inference control and present algorithmic solutions that can help address these challenges. Next, I will present my work on a privacy-preserving mechanism based on Differential Privacy, which provides stronger guarantees for protecting personal data under certain assumptions. Specifically, I will discuss a new privacy-preserving mechanism for location privacy, which offers end-user customization while preserving strong privacy guarantees.

Lastly, I will outline my research plans for developing the next generation of data management systems, which integrate these two data protection methods to support secure, private, and reliable computing for individuals. Through this holistic approach, my work aims at providing a balanced approach to data protection that prioritizes both security and privacy, while also providing users with the ability to customize their own data protection preferences.  

About the Speaker: Primal Pappachan is a Postdoctoral Scholar at Penn State University's College of Information Sciences and Technology, where he works with Professor Anna Squicciarini. He received his Ph.D. in Computer Science from the University of California, Irvine in 2021, where he was advised by Professor Sharad Mehrotra. Primal's research focuses on data management and privacy, particularly on designing and implementing data protection mechanisms. He has published his work in top-tier venues, including VLDB and SIGSPATIAL, and has received awards to support his postdoctoral work, including the Penn State Center for Security Research and Education Seed Grant and the Penn State Center for Socially Responsible AI Collaboration Pilot Funding. In addition to his research, Primal has served on program committees for conferences and workshops and is currently co-organizing a workshop (ASTRIDE) at ICDE 2023.

Feb 13

Knowledge in Pieces and Computing

Suleman Mahmood | University of Illinois at Urbana Champaign

This talk will be held on Monday, February 13th in Boggs 600 at 1:00 p.m. Please note the special weekday, venue, and time for this event.

Abstract: Knowledge in pieces is one of the prominent perspectives on how students access different concepts to solve a problem, and it emphasizes the role of context in how students solve problems. This perspective has been used in computing education research to explain the fragile nature of students’ knowledge in early programming classes. In my presentation, I will explain the knowledge in pieces perspective on conceptual change. I will discuss several examples from prior research where the knowledge in pieces perspective has been used to explain observations about how students solve problems. I will also describe how this theoretical framework explains my own observations of students’ knowledge of cache memories. Finally, I will discuss the implications of the knowledge in pieces perspective for classroom teaching.

About the Speaker: Suleman Mahmood is a Ph.D. candidate in the Department of Computer Science at the University of Illinois Urbana-Champaign. He is interested in exploring how students learn computing concepts, the effectiveness of assessments, and building tools to help students learn. His current research focuses on how students learn about caches within the context of an undergraduate computer science course. He has also developed tools to help students learn memory system performance analysis and coding techniques that can improve memory system performance. He has twice been included in the University of Illinois List of Teachers Ranked as Excellent by Their Students. Suleman completed his master’s in computer science at Lahore University of Management Sciences. He also worked as a software engineer for five years.

Feb 14

Party Tricks: On Primaries and Gerrymandering

Omer Lev | Ben-Gurion University of the Negev

This talk will be held on Tuesday, February 14th, at 1:00 p.m. in Stanley Thomas 316. Please note the special weekday and time for this event.

Abstract: In the last few years, the effects that primaries and gerrymandering have on election outcomes have become a commonly debated issue, particularly in the US. Both primaries (candidate selection by parties) and district-based elections are examples of mechanisms that add stages to a decision-making procedure. I will discuss several results on how these particular stages change elections, including the effect parties may have on the quality of winners, how population distribution affects the possibility of manipulating the districts ("gerrymandering power"), and some new results showing gerrymandering effects in real-world data.

Joint work (in different papers) with Yoram Bachrach, Allan Borodin, Yoad Lewenberg, Jeffrey S. Rosenschein, Nisarg Shah, Tyrone Strangway, and Yair Zick.

About the Speaker: Omer Lev is a faculty member in the Department of Industrial Engineering and Management at Ben-Gurion University of the Negev. Before joining Ben-Gurion University, he was a postdoctoral fellow in the Department of Computer Science at the University of Toronto, following his PhD at the Hebrew University. Omer is interested in various areas of AI and theoretical computer science, mostly involving game-theoretic analysis to understand various phenomena, while keeping the research closely tied to real-world data and observable behavior. Lately, his focus has been on multi-stage decision mechanisms, peer evaluation, and crowd activities.

Feb 24

Data Science for Software Engineering

Tung Thanh Nguyen | Auburn University

This talk will be held on Friday, February 24th, in Boggs 102 at 1:00 p.m. Please note the special weekday, venue, and time for this event.

Abstract: I believe that software and data are changing the world, as captured by the famous quotes "Software is eating the world" and "Data is feeding the world". Therefore, I do research on software and data, especially on how to use data to help people build software faster, smarter, and safer. In this talk, I will present our intelligent tool for code completion as a representative example. The tool uses hidden Markov models to learn programming patterns from more than 200,000 mobile apps and then uses those models to provide context-sensitive method call suggestions. An empirical evaluation shows that it is highly effective, with a top-3 accuracy of 90%.

About the Speaker: Dr. Tung Nguyen is an Assistant Professor from the Department of Computer Science and Software Engineering at Auburn University. He has a PhD in Computer Engineering from Iowa State University. He has more than 10 years of research and teaching experience in Software Engineering and Data Science. He has published nearly 70 peer-reviewed papers at the top venues in the field and won three Best Paper Awards.

Feb 28

Enabling Visualization of Data Analysis Programs

Rebecca Faust | Virginia Tech

This talk will be held on Tuesday, February 28th, at 12:45 p.m. in Stanley Thomas 316. Please note the special weekday for this event.

Abstract: Data analysis methods inherently lose information when moving from initial data to final results, such as context from the underlying data features. The loss of such information inhibits analysts' ability to reason about the processes performed on their data and interpret the results. In this talk, I will discuss my efforts to design visualization and interaction methods that re-expose some of the information lost during analysis. First, I will focus on using visualization to re-introduce underlying data features in dimension reductions. I will demonstrate how my DimReader method provides lost context of the underlying data features via gradient visualization. Additionally, I will discuss how explainable, interactive projections provide context while enabling data exploration through feature-based reorganization. Then, I will broaden the scope and illustrate how visualizing analysis scripts, via my Anteater method, provides context from the internal script data that illuminates how the analysis reaches its final results.

About the Speaker: Rebecca Faust is a postdoctoral researcher and Computing Innovations Fellow in the Sanghani Center for AI and Data Analytics at Virginia Tech working with Dr. Chris North. She received her Ph.D. from the University of Arizona, where she was advised by Dr. Carlos Scheidegger. Her research interests lie at the intersection of data visualization and data analysis, with an emphasis on designing interactive visualization methods to enable explainability and interpretability of analysis methods and results.

Mar 2

Designing AI-Based Applications to Benefit Deaf and Hard-of-Hearing Individuals and Sign-Language Users: A Human–Computer Interaction Perspective

Saad Hassan | Rochester Institute of Technology

This talk will be held on Thursday, March 2nd, at 12:45 p.m. in Stanley Thomas 316.

Abstract: Recent advancements in speech and language-based technologies have opened up new opportunities to design innovative technologies for Deaf and hard-of-hearing people and other sign-language users. In this talk, I will discuss my research at the intersection of accessible computing, human-computer interaction (HCI), and computational linguistics. This includes three main areas of research: sign-language interfaces, caption evaluation and enhancements, and detection of intersectional ableist biases in language models. To investigate designing and evaluating sign-language technologies, I have used HCI methodologies such as interviews, prototyping, task-based experiments, and observational studies. DHH users face challenges during conversational interaction or when viewing media, and individuals who are learning sign language face challenges in looking up the meaning of unfamiliar sign-language words. To address the challenges experienced by both these groups, I have explored the best ways to structure the user experience, given the limitations of current sign-recognition technologies. This talk will focus on two systems I have developed for learners and two systems for DHH individuals, and I will share findings from studies that revealed how these systems benefit sign-language users. This research required collaboration with AI researchers, two Deaf professional organizations, and industry research partners. Finally, I will discuss my future plans to address barriers faced by people with disabilities when using technologies or accessing information, to support their social interactions in digital spaces.

About the Speaker: Saad Hassan is a Ph.D. candidate in the Computing and Information Science program at Rochester Institute of Technology; his research focuses on Human-Computer Interaction, Computing Accessibility, and AI for Social Good. This involves designing and evaluating linguistic and speech technologies to benefit individuals with disabilities, with a focus on deaf or hard-of-hearing (DHH) individuals. He has also worked on the critical and interpretive analysis of language models to uncover intersectional biases against individuals within language models. During his graduate studies, he has worked as a research scientist intern and visiting researcher with the sign-language understanding group at Google AI and the audio experiences and central accessibility teams at Meta Reality Labs, where he helped build, evaluate, and deploy innovative technologies. Saad has published research at highly selective computing venues, including the ACM Conference on Human Factors in Computing Systems (CHI), the ACM SIGACCESS Conference on Computers and Accessibility (ASSETS), ACM Transactions on Accessible Computing (TACCESS), and the Conference on Empirical Methods in Natural Language Processing (EMNLP). In 2022, Saad was awarded a Duolingo dissertation research grant to support his future research on language learning with technology. The same year, he received a best paper nomination at ASSETS. He has served on the program committees for ASSETS and CHI. Saad has served as a teaching assistant for several graduate and undergraduate courses and mentored more than 22 undergraduate and graduate students.

Apr 11

Machine Learning and Physically Based Modeling Application to Coastal and Riverine Environments

Ehab Meselhe, Kelin Hu, and Laura Manuel | River-Coastal Science and Engineering, Tulane University

This talk will be held on Tuesday, April 11th, at 12:45 p.m. in Stanley Thomas 316. Please note the special weekday for this event.

Abstract: Coastal and riverine environments are facing challenges due to a host of natural and anthropogenic factors. Climate change, aggressive urbanization, and improperly designed grey infrastructure are examples of factors that impact natural systems. Computer models are widely used to improve our understanding of system dynamics, evaluate restoration strategies, and develop long-term plans. They are also used for real-time forecasting. Advancements in computer modeling include the integration of physical, biological, and socioeconomic components. These complex models, especially when applied at large spatial scales and for long-term simulations, require substantial computational resources. Machine learning is an attractive tool for environmental applications to supplement physically based models. In this presentation, we share modeling applications for hurricanes, compound flooding, and coastal restoration strategies. We will examine impacts on marsh creation, the health of marine mammals, the sustainability of oysters, and land loss rates.

About the Speakers: Laura Manuel is a 3rd-year PhD student in the River-Coastal Science and Engineering (RCSE) Department. Her research focuses on real-time forecasting and the evaluation of restoration strategies. Kelin Hu is an Assistant Professor-Research at RCSE. He is an expert in coastal modeling applications, especially storm surge, hydrodynamics, waves, and water quality. Ehab Meselhe is a Professor at RCSE. His research focuses on the development and application of computer models for coastal, deltaic, and riverine systems.

Apr 13

Visual Cross-Domain Adaptation Under Various Data Access Privileges

Haifeng Xia | Computer Science PhD Student, Tulane University

Please join us for Haifeng Xia’s PhD dissertation defense talk, which is described below.

This talk will be held on Thursday, April 13th, at 2:00 p.m. CST in Gibson Hall, Room # 308. Please note the special time and venue for this event.

Abstract: Deep neural networks have achieved tremendous progress on computer vision tasks such as image classification and object detection by training models on large-scale, label-sufficient datasets. However, manual annotation of abundant instances is laborious and expensive, and distribution shift between the training (source) and test (target) sets significantly reduces model performance. These issues hinder the widespread application of deep learning techniques. To break through this bottleneck, transfer learning is a feasible and effective paradigm that borrows and adapts existing source knowledge to a target domain with similar tasks. This idea motivates the exploration of unsupervised domain adaptation (UDA), which uses source and target samples to achieve cross-domain alignment. Although UDA methods have demonstrated a powerful ability to overcome the cross-domain learning problem, accessing data from all domains is unrealistic in real-world industrial applications. Hence, this dissertation mainly discusses domain adaptation under various data access privileges and effectively addresses scenarios including source-free domain adaptation, imbalanced domain generalization, and incomplete multi-view domain adaptation.

Keywords: Domain Adaptation, Data Access Privileges, Knowledge Transfer

Apr 21

VR Implementations in Computer Science

Gulsebnem Bishop | Campbellsville University

This talk will be held on Friday, April 21st, at 3:00 p.m. Venue: TBA. Please note the special weekday for this event.

Abstract: The most challenging aspect of teaching is still delivering the course material to students in an environment where engagement and learning take place. So many new technologies are at our disposal, yet we are reluctant to use them for many reasons. Implementing new technologies is scary. As instructors, we think we need complete mastery of these technologies before we even consider implementing them in our classrooms. We try to learn as time permits. We do our best to develop new and exciting solutions to eliminate boredom and promote student interaction and excitement. We get lost. We quit. In this presentation, Dr. Bishop talks about how she started implementing VR in her classroom, why it is important to use these new technologies, and how we can implement them in computer science education as well as in other disciplines. We will have time for a demo session of a classroom environment using a visor at the end of the talk.

About the Speaker: Dr. Gulsebnem (Sheb) Bishop is currently teaching at Campbellsville University, as an Assistant Professor of Computer Information Systems, in the Department of Business, Economics, and Technology. She graduated with a doctorate in Computer Science and Information Systems from Pace University, NY, in 2006. She holds an MBA and a Cybersecurity degree from Stratford University, VA. She is a certified PMP and serves as an at-large board member in the Syracuse PMI chapter. Her area of interest is software development, design and security, data warehousing and database administration, and data analytics. Dr. Bishop worked in non-profit IT, specifically in database administration, for over thirteen years and in other IT areas, such as web design, systems integration/conversion, and IT project management. She worked for several non-profits in New York and Washington, DC (American Museum of Natural History, NAACP, Legal Defense and Education Fund, and Human Rights Campaign). Dr. Bishop is currently working on incorporating VR technology into her own classes and experimenting with different platforms.

Apr 25

Computational Models of User Engagement with Online News

Karthik Shivaram | Computer Science PhD Student, Tulane University

Please join us for Karthik Shivaram’s PhD dissertation defense talk, which is described below.

This talk will be held on Tuesday, April 25th, at 1:00 p.m. CST in Stanley Thomas Hall, Room # 316. Please note the special weekday and time for this event.

Abstract: The shift from traditional print media to online platforms has revolutionized the way people consume and engage with current events. To enhance user involvement, these platforms typically employ personalization algorithms, such as recommendation systems, that learn users' preferences from their past interactions and suggest relevant content. However, the use of such algorithms may result in biased engagement patterns caused by confounded data, leading to concerns about "filter bubbles" and "echo chambers," which over-expose users to information that conforms with their pre-existing beliefs while limiting exposure to opposing viewpoints. Such news consumption habits can bias users, leading to negative consequences such as hyper-partisanship, online polarization, and the spread of misinformation. In this dissertation, we aim to better understand the factors that affect short-term and long-term news engagement behavior on social media. To achieve this, we conduct simulation studies to understand which aspects of recommendation systems contribute to filter bubble formation, and we propose attention-based neural networks to mitigate these effects in content-based recommenders. In addition, we examine long-term news engagement behavior by analyzing observational data collected from Twitter over a decade. Our analysis focuses on a specific type of engagement behavior in which users exhibit distrust toward the news media they engage with, and we examine its impact on engagement diversity. Finally, we propose forecasting methods to predict users' future news engagement behavior, revealing factors that shape long-term news consumption habits on social media.
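
As a rough illustration of the simulation-style studies mentioned above, the following toy sketch (entirely invented here, with made-up preference numbers and a trivial update rule, not the dissertation's actual models) shows how a content-based recommender that reinforces clicked topics can collapse a user's exposure onto a single topic:

```python
# Toy simulation (invented here; not the dissertation's actual setup) of how
# a content-based recommender can narrow exposure: the user profile drifts
# toward clicked items, so recommendations collapse onto a single topic.
import random

random.seed(0)
TOPICS = 5
# Items are one-hot topic vectors; the user starts mildly preferring topic 0.
items = [[1.0 if t == k else 0.0 for t in range(TOPICS)] for k in range(TOPICS)]
profile = [0.3, 0.2, 0.2, 0.15, 0.15]

shown = []
for _ in range(50):
    # Recommend the item most aligned with the current profile.
    best = max(range(TOPICS),
               key=lambda k: sum(p * x for p, x in zip(profile, items[k])))
    shown.append(best)
    # The user clicks with probability equal to their preference for the topic;
    # a click shifts the profile further toward that item.
    if random.random() < profile[best]:
        profile = [0.9 * p + 0.1 * x for p, x in zip(profile, items[best])]

# Every recommendation ends up drawn from the initially preferred topic.
```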

Apr 27

Augmented Reality for Computer Science Education

Sing Chun Lee | Johns Hopkins University

This talk will be held on Thursday, April 27th, at 1:00 p.m. in Stanley Thomas 316. Please note the special time for this event.

Abstract: Augmented Reality (AR) has revolutionized how we interact with the world. By overlaying virtual digital content onto the real world, AR enriches and supplements our sensing, opening up new ways to interact with the environment around us. Nowadays, we see AR everywhere, such as in televised football games (the virtual 10-yard line), in entertainment (Pokemon Go), and in healthcare (enhanced medical data visualization). In education, AR promotes active learning and improves students’ engagement. In this talk, we will first see some examples of AR and discuss two key components of successful AR applications – calibration/registration and user acceptance. Then, we will explore directions for applying AR to computer science education – where and how we should integrate AR to improve learning outcomes and make learning more fun.

About the Speaker: Sing Chun Lee is a PhD Candidate at Johns Hopkins University, where he discovered his passion for teaching and is interested in integrating research into undergraduate education. His research focuses on Augmented Reality and Geometry Processing, specifically on intuitive data augmentation and geometric calculus. At Hopkins, he received Professor Joel Dean Excellence in Teaching Award in 2022, recognizing his effort and dedication to improving undergraduate computer science education. Besides teaching and research, he immerses himself in virtual reality escape room games.


Fall 2022 Colloquia

Sept 6

Introduction to NSF OAC Research Programs Supporting Software and Data Cyberinfrastructure Development

Seung-Jong Jay Park | Louisiana State University/National Science Foundation

This talk will be held on Tuesday, September 6th, at 1:00 p.m. in Boggs 600. Please note the special weekday, time, and venue for this event.

About the Speaker: Dr. Seung-Jong Jay Park is the Dr. Fred H. Fenn Memorial Professor of Computer Science and Engineering at Louisiana State University, where he has worked in cyberinfrastructure development for large-scale scientific and engineering applications since 2004. He received his Ph.D. from the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He has performed interdisciplinary research projects including (1) big data and deep learning research, such as developing software frameworks for large-scale science applications; and (2) cyberinfrastructure development using cloud computing, high-performance computing, and high-speed networks. These projects have been supported by federal and state funding programs including NSF, NASA, NIH, ONR, and AFRL. He received IBM faculty research awards between 2015 and 2017. He also served as an associate director of the Center for Computation and Technology at LSU between 2016 and 2018. Since 2021, he has served at the U.S. National Science Foundation (on leave from LSU) as a program director managing research support programs such as Cyberinfrastructure for Sustained Scientific Innovation (CSSI), Principles and Practice of Scalable Systems (PPoSS), Computational and Data-Enabled Science and Engineering (CDS&E), and others.

Nov 28

Recent Advances in Robust Machine Learning

Masashi Sugiyama | RIKEN/The University of Tokyo

This talk will be held on Monday, November 28th, at 11:00 a.m. in Boggs 600. Please note the special weekday, time, and venue for this event.

Abstract: When machine learning systems are trained and deployed in the real world, we face various types of uncertainty. For example, training data at hand may contain insufficient information, label noise, and bias. In this talk, I will give an overview of our recent advances in robust machine learning, including weakly supervised classification (positive-unlabeled classification, positive-confidence classification, complementary-label classification, etc.), noisy label learning (noise transition estimation, instance-dependent noise, clean sample selection, etc.), and domain adaptation (joint importance-predictor learning for covariate shift adaptation, dynamic importance-predictor learning for full distribution shift, etc.).

About the Speaker: Masashi Sugiyama received a Ph.D. in Computer Science from the Tokyo Institute of Technology in 2001. He has been a Professor at the University of Tokyo since 2014 and concurrently Director of the RIKEN Center for Advanced Intelligence Project (AIP) since 2016. His research interests include theories and algorithms of machine learning. He served as program co-chair for the Neural Information Processing Systems (NeurIPS) conference in 2015, the International Conference on Artificial Intelligence and Statistics (AISTATS) in 2019, and the Asian Conference on Machine Learning (ACML) in 2010 and 2020. He (co)authored Machine Learning in Non-Stationary Environments (MIT Press, 2012), Density Ratio Estimation in Machine Learning (Cambridge University Press, 2012), and Machine Learning from Weak Supervision (MIT Press, 2022).

Dec 8

The National Science Data Fabric: Democratizing Data Access for Science and Society

Valerio Pascucci | University of Utah

This talk will be held at 3:30 p.m. in Stanley Thomas 302. Please note the special time for this event. 

Abstract: Effective use of data management techniques for the analysis and visualization of massive scientific data is a crucial ingredient for the success of any experimental facility, supercomputing center, or cyberinfrastructure that supports data-intensive scientific investigations. Data movements have become a central component that can enable or stifle innovation in the progress towards high-resolution experimental data acquisition (e.g., APS, SLAC, NSLS II). However, universal data delivery remains elusive, limiting the scientific impacts of these facilities. This is particularly true for high-volume/high-velocity datasets and resource-constrained institutions.

This talk will present the National Science Data Fabric (NSDF) testbed, which introduces a novel trans-disciplinary data fabric integrating access to and use of shared storage, networking, computing, and educational resources. The NSDF technology addresses the key data management challenges involved in constructing complex streaming workflows that take advantage of data processing opportunities that may arise while data is in motion. This technology finds practical use in many research and industrial applications, including materials science, precision agriculture, ecology, climate modeling, astronomy, connectomics, and telemedicine.

This NSDF overview will include several techniques that allow building a scalable data movement infrastructure for fast I/O while organizing the data in a way that makes it immediately accessible for processing, analytics, and visualization with resources from campus computing cyberinfrastructures, the Open Storage Network, the Open Science Grid, NSF/DOE leadership computing facilities, CloudLab, Chameleon, and Jetstream, to name a few. For example, I will present a use case for real-time data acquisition from an Advanced Photon Source (APS) beamline that allows remote users to monitor the progress of an experiment, with direct integration into the Materials Commons community repository. We accomplish this with an ephemeral NSDF installation that can be instantiated via Docker or Singularity at the beginning of the experiment and removed right after. In general, the advanced use of containerized applications with automated deployment and scaling makes the practical use of clients, servers, and data repositories straightforward, even for non-expert users. Full integration with Python scripting facilitates the use of external libraries for data processing; for example, the scan of a 3D metallic foam can be easily distributed via a Jupyter notebook.

Overall, this leads to building flexible data streaming workflows for massive imaging models without compromising the interactive nature of the exploratory process, the most effective characteristic of discovery activities in science and engineering. The presentation will be combined with a few live demonstrations of the same technology, including notebooks which are being used to provide undergraduate students of a minority-serving institution (UTEP) with real-time access to large-scale data normally used only by established scientists in well-funded research groups.

About the Speaker: Valerio Pascucci is the Inaugural John R. Parks Endowed Chair, the founding Director of the Center for Extreme Data Management Analysis and Visualization (CEDMAV), a Faculty of the Scientific Computing and Imaging Institute, and a Professor of the School of Computing of the University of Utah. Valerio is also the President of ViSOAR LLC, a University of Utah spin-off, and the founder of Data Intensive Science, a 501(c) nonprofit providing outreach and training to promote the use of advanced technologies for science and engineering. Before joining the University of Utah, Valerio was the Data Analysis Group Leader of the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory and an Adjunct Professor of Computer Science at the University of California, Davis. Valerio's research interests include Big Data management and analytics, progressive multi-resolution techniques in scientific visualization, discrete topology, and compression. Valerio is the coauthor of more than two hundred refereed journal and conference papers and was an Associate Editor of the IEEE Transactions on Visualization and Computer Graphics. 

Dec 13

Interdisciplinary Project Presentations

Fang Qi, Linsen Li, and Xiaolin Sun | Computer Science PhD Students, Tulane University

This event will be held at 12:30 p.m. in Stanley Thomas 302. Please note the special weekday for this event.  

Fang Qi

Quantum Vulnerability Analysis to Accurately Estimate the Quantum Algorithm Success Rate

Abstract: Quantum technology is still in its infancy, but superconducting circuits have made great progress toward pushing forward the computing power of the quantum state of the art. Due to limited error characterization methods and temporally varying error behavior, quantum operations can only be quantified to a rough percentage of successful execution, which fails to provide an accurate description of real quantum execution in the current noisy intermediate-scale quantum (NISQ) era. State-of-the-art success rate (SR) estimation methods either suffer from significant prediction errors or unacceptable computational complexity. There is therefore an urgent need for a fast and accurate quantum program estimation method that provides stable estimates as program size grows. Inspired by classical architectural vulnerability factor (AVF) studies, we propose and design the Quantum Vulnerability Factor (QVF) to locate manifested errors, which is aggregated into the Cumulative Quantum Vulnerability (CQV) to perform SR prediction. Evaluated with well-known benchmarks on three 27-qubit and one 65-qubit quantum machines, CQV outperforms the state-of-the-art prediction technique ESP, achieving on average 6 times lower relative prediction error, with best cases at 20 times, for benchmarks with a real SR above 0.1%.
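
For context, the baseline ESP metric that CQV is compared against is commonly described as the product of the success probabilities of a circuit's individual operations. The sketch below illustrates that product form with made-up error rates; it is not the QVF/CQV method itself:

```python
# Illustrative sketch of the baseline ESP (Estimated Success Probability)
# metric: a circuit's success rate is approximated as the product of the
# success probabilities (1 - error rate) of its operations. The error
# rates below are made-up example values, not real device calibration data.

def estimated_success_probability(gate_errors, readout_errors):
    """Multiply the per-operation success probabilities together."""
    esp = 1.0
    for err in gate_errors + readout_errors:
        esp *= (1.0 - err)
    return esp

# A toy 5-gate circuit measured on 2 qubits:
gate_errors = [0.001, 0.001, 0.01, 0.01, 0.01]  # hypothetical gate error rates
readout_errors = [0.02, 0.02]                   # hypothetical readout error rates
esp = estimated_success_probability(gate_errors, readout_errors)
```

A metric of this multiplicative form shrinks rapidly as circuits grow, which is one intuition for why more refined estimators are needed at scale.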

Linsen Li

Online Reviews Are Leading Indicators of Changes in K-12 School Attributes

Abstract: School rating websites are increasingly used by parents to assess the quality and fit of U.S. K-12 schools for their children. These online reviews often contain detailed descriptions of a school’s strengths and weaknesses, which both reflect and inform perceptions of a school. Existing work on these text reviews has focused on finding words or themes that underlie these perceptions, but has stopped short of using the textual reviews as leading indicators of school performance. In this paper, we investigate to what extent the language used in online reviews of a school is predictive of changes in the attributes of that school, such as its socio-economic makeup and student test scores. Using over 300K reviews of 70K U.S. schools from a popular ratings website, we apply language processing models to predict whether schools will significantly increase or decrease in an attribute of interest over a future time horizon. We find that using the text improves predictive performance significantly over a baseline model that includes only the historical time series of the indicators themselves, suggesting that the review text carries predictive power. A qualitative analysis of the most predictive terms and phrases indicates a number of topics that serve as leading indicators, such as diversity, changes in school leadership, a focus on testing, and school safety.

Xiaolin Sun

Pandering in a (Flexible) Representative Democracy

Abstract: In representative democracies, the election of new representatives in regular election cycles is meant to prevent corruption and other misbehavior by elected officials and to keep them accountable in service of the "will of the people." This democratic ideal can be undermined when candidates are dishonest when campaigning for election over these multiple cycles or rounds of voting. Much of the work in computational social choice (COMSOC) to date has investigated strategic actions in only a single round. We introduce a novel formal model of pandering, or strategic preference reporting by candidates seeking to be elected, and examine the resilience of two democratic voting systems to pandering within a single round and across multiple rounds. The two voting systems we compare are Representative Democracy (RD) and Flexible Representative Democracy (FRD). For each voting system, our analysis centers on the types of strategies candidates employ and on how voters update their views of candidates based on how the candidates have pandered in the past. We provide theoretical results on the complexity of pandering in our setting for a single cycle, formulate our problem for multiple cycles as a Markov Decision Process, and use reinforcement learning to study the effects of pandering by both single candidates and groups of candidates across a number of rounds.
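
The multi-cycle formulation as a Markov Decision Process can be illustrated generically. The toy states, transitions, and rewards below are invented for this sketch and are not the paper's actual pandering model; they simply show the value-iteration machinery such a formulation enables:

```python
# Generic illustration of a Markov Decision Process formulation. The states,
# transitions, and rewards below are toy values invented for this sketch,
# not the actual pandering model from the dissertation.

def value_iteration(states, actions, T, R, gamma=0.9, eps=1e-6):
    """T[s][a]: list of (next_state, prob); R[s][a]: immediate reward."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a])
                        for a in actions)
                 for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            return V_new
        V = V_new

# Toy scenario: a candidate is "trusted" or "distrusted"; pandering yields a
# short-term reward but risks losing voters' trust.
states = ["trusted", "distrusted"]
actions = ["honest", "pander"]
T = {"trusted":    {"honest": [("trusted", 1.0)],
                    "pander": [("trusted", 0.5), ("distrusted", 0.5)]},
     "distrusted": {"honest": [("trusted", 0.3), ("distrusted", 0.7)],
                    "pander": [("distrusted", 1.0)]}}
R = {"trusted":    {"honest": 1.0, "pander": 2.0},
     "distrusted": {"honest": 0.0, "pander": 0.5}}
V = value_iteration(states, actions, T, R)
# In this toy model, staying trusted is worth more in the long run, even
# though pandering pays more immediately.
```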


Previous Colloquia



Spring 2022 Colloquia

January 24

AI-Driven Big Data Analytics

Justin Zhan | University of Arkansas

Abstract: Data has become the central driving force behind new discoveries in science, informed governance, insight into society, and economic growth in the 21st century. Abundant data is a direct result of innovations including the Internet, faster computer processors, cheap storage, and the proliferation of sensors, and it has the potential to increase business productivity and enable scientific discovery. However, while data is abundant and everywhere, people do not have a fundamental understanding of it. Traditional approaches to decision making under uncertainty are not adequate to deal with massive amounts of data, especially when such data is dynamically changing or becomes available over time. These challenges require novel techniques in AI-driven data analytics. In this seminar, a number of recently funded AI-driven big data analytics projects will be presented, addressing various data analytics, mining, modeling, and optimization challenges.

About the Speaker: Dr. Justin Zhan is the Arkansas Research Alliance Scholar and Professor of Data Science at the Department of Computer Science and Computer Engineering, University of Arkansas. He is the Director of Data Science at the Arkansas Integrative Metabolic Research Center and a joint professor at the Department of Biomedical Informatics, School of Medicine, University of Arkansas for Medical Sciences. He received his PhD from the University of Ottawa, his master's degree from Syracuse University, and his bachelor's degree from Liaoning University of Engineering and Technology. His research interests include data science, biomedical informatics, deep learning and big data analytics, cybersecurity and blockchain, and network science and social computing. He has served as conference general chair, program chair, publicity chair, workshop chair, or program committee member for over one hundred and fifty international conferences, and as editor-in-chief, editor, associate editor, guest editor, editorial advisory board member, or editorial board member for about thirty journals. He has published 246 articles in peer-reviewed journals and conferences and delivered 30 keynote speeches and invited talks. His research has been extensively funded by the National Science Foundation, the Department of Defense, and the National Institutes of Health.

January 28

The Privacy Policy Permission Model: A Methodology for Modeling Privacy Policies

Maryam Majedi | University of Calgary

This event will be held on Friday, January 28th. Please note the special weekday for this event.

Abstract: Organizations use privacy policies to communicate their data collection practices to their clients. A privacy policy is a set of statements that specifies how an organization gathers, uses, discloses, and maintains clients' data. However, most privacy policies lack a clear, complete explanation of how data providers' information is used. In this talk, I will present a modeling methodology, called the Privacy Policy Permission Model (PPPM), that provides a uniform, easy-to-understand representation of privacy policies, which can show how data is used within an organization's practice.
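
As a rough illustration of the idea of a uniform permission representation, one might encode policy statements as structured tuples. The field names below (role, action, data, purpose) are assumptions for this sketch, not the actual PPPM notation:

```python
# Illustrative only: one way to give privacy-policy statements a uniform,
# machine-checkable form is as permission tuples. The field names here
# (role, action, data, purpose) are assumptions for this sketch, not the
# actual PPPM notation.

# Each statement grants a role permission to perform an action on a data
# item for a stated purpose.
policy = [
    ("marketing",   "use",     "email",   "newsletters"),
    ("billing",     "collect", "address", "invoicing"),
    ("third-party", "share",   "email",   "advertising"),
]

def permitted_purposes(policy, role, action, data):
    """List the purposes for which a role may perform an action on data."""
    return [purpose for r, a, d, purpose in policy
            if (r, a, d) == (role, action, data)]

purposes = permitted_purposes(policy, "marketing", "use", "email")
```

A structured form like this makes it possible to answer questions ("who may share my email, and why?") that are hard to extract from free-text policies.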

About the Speaker: Dr. Maryam Majedi completed a teaching stream postdoc at the University of Toronto, where she worked with the Embedded Ethics Education Initiative (E3I) team and developed and delivered ethics modules for computer science courses. Dr. Majedi completed her Ph.D. in data privacy at the University of Calgary, where she introduced a new technique to model privacy policies. She holds an M.Sc. in High-Performance Scientific Computing from the University of New Brunswick and a Fellowship in Medical Innovation from Western University.

January 31

Designing the Next Generation of User Interfaces for Children

Julia Woodward | University of Florida

Abstract: Children are increasingly interacting with technology, such as touchscreen devices, at home, in the classroom, and at museums. However, these devices are not designed to take into account that children interact with devices differently than adults. As children’s everyday use of technology increases, these devices need to be tailored towards children. In this talk, I will present research exploring the differences between how children and adults interact and think about different technology. Our findings lead to a better understanding of how to design technology for children. I will also present some recent work examining how to design information in augmented reality (AR) headsets for both children’s and adults’ task performance. I will conclude with some takeaways and plans for future work in designing the next generation of user interfaces for children.

About the Speaker: Julia Woodward is a Doctoral Candidate studying Human-Centered Computing in the Department of Computer and Information Science and Engineering at the University of Florida, as well as a National Science Foundation Graduate Research Fellow. Her main research areas include examining how to design better user interfaces tailored towards children, and understanding how children think about and use technology. Through her research, she has identified specific differences between how adults and children interact with technology and has provided recommendations for designing technology for children. Her current dissertation work focuses on understanding how to design information in augmented reality (AR) headsets to aid in adults’ and children’s task performance and how it differs between the two populations. Julia is graduating this year and plans to continue researching and designing technology tailored for children.

February 4

Personality and Emotion in Strong-Story Narrative Planning

Alireza Shirvani | Saint Louis University

This talk will be held online only on Friday, February 4th, at 4:00 p.m. CST. Please note the special weekday for this event. Zoom details will be provided via the announcement listserv, or you may email to request the corresponding link.

Abstract: Interactive virtual worlds provide an immersive and effective environment for training, education, and entertainment purposes. Virtual characters are an essential part of every interactive narrative. I propose models of personality and emotion that are highly domain independent and integrate those models into multi-agent strong-story narrative planning systems. My models of emotion and personality enable the narrative generation system to create more opportunities for players to resolve conflicts using certain behavior types. In doing so, the author can encourage the player to adopt and exhibit those behaviors.

About the Speaker: Dr. Alireza Shirvani is a visiting professor at Saint Louis University in the Department of Computer Science. He received his PhD in Computer Science from the University of Kentucky in 2021. His research focuses on Computational Narrative, with a more general interest in Artificial Intelligence for Games. He is particularly interested in generating believable behavior by integrating emotion and personality into virtual characters. One of his major projects, called Camelot, provides a free, easy-to-use 3D tool to visualize interactive stories. This engine acts as a presentation layer to an external program, called the experience manager, which can be written in any programming language.

February 14

Overcoming Heterogeneity in Autonomous Cyber-Physical Systems

Ivan Ruchkin | University of Pennsylvania

Abstract: From autonomous vehicles to smart grids, cyber-physical systems (CPS) play an increasingly important role in today's society. Often, CPS operate autonomously in highly critical settings, and thus it is imperative to engineer these systems to be safe and trustworthy. However, it is particularly difficult to do so due to CPS heterogeneity -- the high diversity of components and models used in these systems. This heterogeneity substantially contributes to fragmented, incoherent assurance as well as to inconsistencies between different models of the system.

This talk will present two complementary techniques for overcoming CPS heterogeneity: confidence composition and model integration. The former technique combines heterogeneous confidence monitors to produce calibrated estimates of the run-time probability of safety in CPS with machine learning components. The latter technique discovers inconsistencies between heterogeneous CPS models using a logic-based specification language and a verification algorithm. The application of these techniques will be demonstrated on an unmanned underwater vehicle and a power-aware service robot. These techniques serve as stepping stones towards the vision of engineering autonomous systems that are aware of their own limitations.
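
To give a flavor of confidence composition, the simplest case composes independent monitors whose conditions must all hold by multiplying their calibrated probabilities. This is only a hedged sketch of that simplest case, with hypothetical numbers; the actual framework handles richer logical combinations:

```python
# Hedged sketch of the simplest composition case: if monitors are assumed
# independent and the system is safe only when every monitored condition
# holds, the composed probability of safety is the product of the monitors'
# calibrated probabilities. The real confidence-composition framework
# handles richer logical combinations; the numbers below are hypothetical.

def compose_and(monitor_probs):
    """Probability that all independently monitored conditions hold."""
    prob = 1.0
    for p in monitor_probs:
        prob *= p
    return prob

# Three hypothetical monitors: obstacle detection, localization consistency,
# and an ML-output plausibility check.
p_safe = compose_and([0.99, 0.95, 0.90])
```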

About the Speaker: Ivan Ruchkin is a postdoctoral researcher in the PRECISE center at the University of Pennsylvania. He received his PhD in Software Engineering from Carnegie Mellon University. His research develops integrated high-assurance methods for modeling, analyzing, and monitoring modern cyber-physical systems. His contributions were recognized with multiple Best Paper awards, a Gold Medal in the ACM Student Research Competition, and the Frank Anger Memorial Award for crossover of ideas between software engineering and embedded systems. More information can be found at

February 21

Using Kernel-level Data Provenance for Intrusion Detection

Xueyuan (Michael) Han | Harvard University

Abstract: Attacks today are increasingly difficult to detect and their damage continues to skyrocket. For example, it takes an average of over 200 days to identify a data breach and costs about $4 million to rectify. More than 18,000 organizations were affected in the late 2020 SolarWinds supply chain attack. Devastating attacks that make headlines (e.g., Equifax, Target, and Kaseya) are no longer isolated, rare incidents.

In this talk, I will present my work on leveraging kernel-level data provenance to detect system intrusions. Kernel-level data provenance describes system activity as a directed acyclic graph that represents interactions between low-level kernel objects such as processes, files, and sockets. I will describe CamFlow, an OS infrastructure that captures such provenance graphs with negligible performance overhead. I will then describe a host intrusion detection system (IDS), called Unicorn, that uses provenance graphs to detect particularly dangerous attacks called advanced persistent threats (APTs). APTs are the main cause of many of today’s large-scale data breaches. Unicorn applies machine learning to provenance graphs to identify system anomalies caused by APTs in real time without a priori attack knowledge.
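
To make the graph representation concrete, here is a much-simplified sketch (with hypothetical node names) of encoding provenance edges and summarizing them as a typed-edge histogram, a greatly reduced stand-in for the graph sketches Unicorn actually computes:

```python
# Much-simplified sketch (with hypothetical node names) of kernel-level
# provenance as a directed graph, summarized as a histogram of
# (source type, relation, destination type) triples -- a greatly reduced
# stand-in for the graph sketches Unicorn actually computes.
from collections import Counter

# Each edge: (source node, relation, destination node); nodes are
# (type, identifier) pairs.
edges = [
    (("process", "bash"), "exec",  ("process", "curl")),
    (("process", "curl"), "open",  ("socket", "10.0.0.5:443")),
    (("process", "curl"), "write", ("file", "/tmp/payload")),
    (("process", "bash"), "read",  ("file", "/etc/passwd")),
]

def typed_edge_histogram(edges):
    """Count edges by (source type, relation, destination type)."""
    return Counter((src[0], rel, dst[0]) for src, rel, dst in edges)

hist = typed_edge_histogram(edges)
```

An anomaly detector could then compare such summaries of a running system against those of known-benign executions.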

I will close the talk by discussing challenges and opportunities in provenance-based intrusion detection, including efforts to develop a robust IDS that not only provides timely anomaly detection, but also explains the manner in which an attack unfolds.

About the Speaker: Xueyuan (Michael) Han is a computer science doctoral candidate advised by Professor James Mickens at Harvard University and Professor Margo Seltzer at the University of British Columbia. His research interests lie at the intersection of systems, security, and privacy. His work focuses on combining practical system design and machine learning to detect host intrusions, and designing language-level frameworks that respect user directives for handling private data. He has previously spent time at the University of Cambridge, Microsoft Research, and NEC Labs America. He is a Siebel Scholar and holds a B.S. in computer science from UCLA.

March 7

Automatic Program Repair and Inconsistency Detection

Thibaud Lutellier | University of Waterloo

Abstract: From bug detection to bug repair, software reliability is involved in all parts of the development cycle. Automation is desirable, as it can reduce the time developers spend on these tasks and discover issues that would be hard to find manually. This job talk will present our recent advances in automatic program repair and inconsistency detection. In the first part of the talk, I will introduce a new automatic program repair technique that uses ensemble learning and a new neural machine translation (NMT) architecture to automatically fix bugs in multiple programming languages. We then extend this work with a pre-trained programming-language model and a new code-aware search strategy. This extended approach outperforms all existing automatic program repair techniques on popular benchmarks. In the second part of the talk, I will present a new automated inconsistency detection technique that finds bugs in PDF readers and files, and explain how we extended it to find bugs in another domain.

About the Speaker: Thibaud Lutellier is a postdoctoral fellow in the Electrical and Computer Engineering Department at the University of Waterloo. His research interests lie at the crossroads of software engineering and artificial intelligence. His recent work includes proposing new AI-driven program repair techniques and new solutions for detecting bugs in deep learning libraries. He received an ACM SIGSOFT Distinguished Paper Award at ASE '20 for his work on analysing variance in deep learning training.

April 4

More Benign Point-Based Additive Manufacturing Through Process Planning 

Lee Clemon | University of Technology Sydney

This talk will be held on Monday, April 4th, at 3:00 p.m. in Boggs 239. Please note the special time and venue for this event.

Abstract: Additive manufacturing is a rapidly growing consumer and commercial fabrication industry that may invert the design and manufacturing paradigm for many products. However, this suite of technologies is currently limited by slow cycle times and a direct trade-off between throughput and precision. Current deposition methods rely on simple algorithms and sequential fabrication of each layer. Improvements in deposition planning can accelerate throughput and reduce resource use. We establish new approaches that leverage the structure of the intended model and relax unnecessary fabrication constraints to circumvent current speed limitations and maximize value-adding operations. These efforts explore multiple algorithms for constructing an improved toolpath. In addition, material use and energy consumption in additive manufacturing pose a challenge in production scale-up, particularly when considering climate change and waste generation. We characterize the resource intensity of current machines and typical users to enable designers to make more informed decisions and to identify opportunities for waste reduction. With these advances in deposition planning, enabled by multi-material fabrication, we propose new opportunities for creating circular economies that leverage additive manufacturing to give new life to waste materials. We then evaluate the structural implications of material sequestration to enable a redesign of products and product lifecycles for these circular economies.

About the Speaker: Dr. Lee Clemon, P.E. is a research scientist in advanced manufacturing and high-consequence design and a licensed professional engineer. He focuses on the interplay of materials, design, and manufacturing for a more reliable and environmentally conscious industrial world. His current research interests are in process improvement and material property manipulation in advanced manufacturing processes, with an emphasis on additive and hybrid additive-subtractive manufacturing through particulate, wire, layer, and ensemble fabrication methods. He is a management member of the Centre for Advanced Manufacturing, program co-lead of the ARC Training Centre for Collaborative Robotics in Advanced Manufacturing, and a member of the RF and Communications Laboratory. He also serves the mechanical engineering profession as an active volunteer for ASME, providing professional development and training. Dr. Clemon holds a Ph.D. and an M.S. in Mechanical Engineering from the University of California at Berkeley, and a B.S. in Mechanical Engineering from the University of Kansas. He was previously a staff member at Sandia National Laboratories as a design and research-and-development engineer working on hazardous substance processing systems and manufacturing process development, before becoming a Lecturer in the School of Mechanical and Mechatronic Engineering at the University of Technology Sydney.

May 2

A Life in Logic: Colloquium in Honor of Michael Mislove 

This event will be held in honor of Prof. Michael Mislove, who is retiring from Tulane University after over 50 years of service. Please join us in celebrating his work and his time at Tulane.

This event will be held on Monday, May 2nd, from 4:00 p.m. - 6:30 p.m. (CDT) in Boggs 600. Please note the special venue for this event.  

4:00 p.m. - Welcome and short introduction, Carola Wenk (Computer Science) and Morris Kalka (Math)

4:10 p.m. - 4:40 p.m.: Mike Mislove and Tulane, Some Reminiscences

Jimmie Lawson | Louisiana State University

Abstract: Mike Mislove's professional career at Tulane has intersected significantly with mine over the years, and I would like to share some (by no means complete) reminiscences, recollections, and reflections from those years of his research, leadership, and service, with some important Tulane personalities forming a backdrop.

4:40 p.m. - 5:10 p.m.: Additional Talks

Peter Bierhorst | University of New Orleans

Ellis Fenske | United States Naval Academy

5:10 p.m.: Video

Reception with drinks and snacks to follow.

The event will also be available over Zoom for those who cannot attend in person. As referenced above, Zoom details will be provided via the announcement listserv, or you may email to request the corresponding link.

May 11

Security Analysis of Binary Code via Architectural and Compiler Support

Jiang Ming | University of Texas at Arlington

This talk will be held on Wednesday, May 11, at 4:00 p.m. in Stanley Thomas 302. Please note the special weekday and venue for this event.

Abstract: Software security has become a hugely important consideration in all aspects of our lives. As software vulnerabilities and malware account for a large portion of cyberattacks, automated security analysis of binary code has boomed over the past few years. Binary code is everywhere, from IoT device firmware to malicious programs. However, binary code analysis is highly challenging due to its low-level and complicated nature. My research explores cross-disciplinary methodologies to effectively address security problems in binary code. In this talk, I will first present my recent work on the security hardening of embedded systems (ASPLOS '22). We take advantage of architectural support to safely eliminate unused code of shared libraries on embedded systems. Our work can significantly reduce the code-reuse attack surface with zero runtime overhead. Then, I will look into the most common source of binary code differences: compiler optimization. Our award-winning PLDI '21 paper takes the first step toward systematically studying the effectiveness of compiler optimization on binary code differences. We provide an important new viewpoint on the established binary diffing research area and challenge long-held optimization-resistance claims with compelling evidence.

About the Speaker: Jiang Ming is an Assistant Professor in the Department of Computer Science and Engineering at the University of Texas at Arlington. He received his Ph.D. from Pennsylvania State University in 2016. His research interests span software and systems security, with a focus on binary code analysis, hardware-assisted software security analysis, mobile systems security, and language-based security. His work has been published in prestigious conferences, including IEEE S&P, ASPLOS, PLDI, USENIX Security, ACM CCS, NDSS, ICSE, FSE, ASE, and MobiSys. Jiang has been funded by multiple NSF grants, a Cisco research award, and the UT System Rising STARs program. He was the recipient of a UTA College of Engineering Outstanding Early Career Research Award, an ACM SIGPLAN Distinguished Paper Award, and an ACM SIGSOFT Distinguished Paper nomination.

May 26

Puzzles in Life, Work, and Computer Science

Matthew Toups | University of New Orleans

This talk will be held on Thursday, May 26, at 4:00 p.m. in Stanley Thomas 302. Please note the special weekday and venue for this event.

Abstract: Jigsaw puzzles have been produced for centuries, and printed puzzles have been a daily part of newspapers since the early 20th century. Puzzles can provide lightweight recreation, can serve as a distraction during the recent pandemic, or can even be the subject of rigorous computational complexity analysis. Puzzles and puzzle-solving also inform how I approach teaching Computer Science at the undergraduate level. Not only are puzzles a way to add both fun and challenge to courses, but more broadly I have several observations on CS pedagogy which I use puzzles to illustrate. Puzzles not only teach problem-solving, but they can motivate both through competition and co-operation, and can scale up and down in difficulty as needed. We can also step back from small puzzle pieces to examine the larger picture of what we want students to synthesize. I will also figuratively put together some puzzle pieces from my time as a student as a way to introduce my perspective on our discipline.

About the Speaker: Matthew Toups, born and raised in New Orleans, holds a B.S. in Computer Science from Carnegie Mellon University and an M.S. from the University of New Orleans. Since 2016 he has served as I.T. Director for the University of New Orleans' Computer Science Department, supporting a wide range of research and teaching technology needs. Additionally, he has taught numerous undergraduate systems courses at UNO, and he sponsors a student cybersecurity competition team. He also enjoys solving puzzles.

Fall 2021 Colloquia

Oct 4

Complex Proteoform Identification by Top-down Mass Spectrometry

Xiaowen Liu | Biomedical Informatics and Genomics Division, Tulane University

Abstract: Mass spectrometry-based proteomics has been rapidly developed in the past decade, but researchers are still in the early stage of exploring the world of complex proteoforms, which are protein products with various primary structure alterations resulting from gene mutations, alternative splicing, post-translational modifications, and other biological processes. Proteoform identification is essential to mapping proteoforms to their functions and discovering novel proteoforms and new protein functions. Top-down mass spectrometry is the method of choice for identifying complex proteoforms because it provides a “bird’s eye view” of intact proteoforms. The combinatorial explosion of various alterations on a protein may result in billions of possible proteoforms, making proteoform identification a challenging computational problem. We propose to use mass graphs to efficiently represent proteoforms and design mass graph alignment algorithms for proteoform identification by top-down mass spectrometry. Experiments on top-down mass spectrometry data sets show that the proposed methods are capable of identifying complex proteoforms with various alterations.

About the Speaker: Dr. Xiaowen Liu is a professor of bioinformatics in the Division of Biomedical Informatics and Genomics, Tulane University School of Medicine. He received his Ph.D. degree in computer science from the City University of Hong Kong in 2008. After four years of postdoctoral training at the University of Western Ontario, the University of Waterloo, and the University of California, San Diego, Dr. Liu served as an Assistant and then Associate Professor in the Department of BioHealth Informatics, Indiana University-Purdue University Indianapolis from 2012 to 2021. Recently, he joined Tulane University School of Medicine. His research focuses on developing computational methods for analyzing mass spectrometry data, especially top-down mass spectrometry data. His lab developed the TopPIC suite, a widely used software package for proteoform identification by top-down mass spectrometry.

Nov 8

Interdisciplinary Project Presentations

Tianyi Xu, Taotao Jing, and Haifeng Xia | Computer Science PhD Students, Tulane University

Tianyi Xu

Contextual Bandits With Probing

Abstract: The problem of sequential decision-making arises in many practical applications, from clinical trials to recommendation systems and anomaly detection. Usually, each decision has associated side information or context (for example, a user's profile), and only the reward for the selected action is revealed (i.e., bandit feedback). Since the reward is often a random variable, a statistical approach (contextual bandits) can be applied to these problems. Rather than choosing a single arm each round, we consider a novel extension of the bandit learning framework that incorporates joint probing and play. We assume that before the decision maker chooses an arm to play in each round, it can probe a subset of arms and observe their rewards (in that round). The decision maker then picks an arm to play according to the observations obtained in the probing stage and historical data. Our bandit learning model and its extensions can potentially be applied to a large body of sequential decision-making problems that involve joint probing and play under uncertainty. We will present an efficient algorithm for these problems and establish its regret bound using tools from online learning and statistics.
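To make the probe-then-play setting concrete, here is a toy simulation (illustrative only; the arm means, the probe rule, and the fallback policy are assumptions, not the algorithm or regret analysis from the talk). Each round, a random subset of Bernoulli arms is probed and their rewards in that round are observed before one arm is played:

```python
import random

def simulate(probe_size, rounds=5000, seed=0):
    """Toy probe-then-play bandit: probe some arms, then play one."""
    rng = random.Random(seed)
    means = [0.2, 0.4, 0.6, 0.8]   # hypothetical (unknown) Bernoulli arm means
    counts = [0] * len(means)
    totals = [0.0] * len(means)
    played_reward = 0.0
    for _ in range(rounds):
        # hidden reward realization of every arm in this round
        rewards = [1.0 if rng.random() < m else 0.0 for m in means]
        # probing stage: observe this round's rewards for a subset of arms
        probed = rng.sample(range(len(means)), probe_size)
        for a in probed:
            counts[a] += 1
            totals[a] += rewards[a]
        # play stage: prefer a probed arm that paid off this round,
        # otherwise fall back to the empirically best arm so far
        paying = [a for a in probed if rewards[a] == 1.0]
        if paying:
            arm = paying[0]
        else:
            arm = max(range(len(means)),
                      key=lambda a: totals[a] / counts[a] if counts[a] else 0.5)
        counts[arm] += 1
        totals[arm] += rewards[arm]
        played_reward += rewards[arm]
    return played_reward / rounds  # average reward of the played arms
```

Even this naive probing rule illustrates the benefit the abstract describes: probing a couple of arms before playing yields a higher average reward than playing greedily with no probing.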

Taotao Jing

Augmented Multi-Modality Fusion for Generalized Zero-Shot Sketch-based Visual Retrieval

Abstract: Zero-shot sketch-based image retrieval (ZS-SBIR) has attracted great attention recently, due to the potential application of sketch-based retrieval under the zero-shot scenario, where the categories of query sketches and the gallery photo pool are not observed in the training stage. However, the more general and practical scenario, in which the query sketches and gallery photos contain both seen and unseen categories, remains insufficiently explored. This problem is defined as generalized zero-shot sketch-based image retrieval (GZS-SBIR), which is the focus of this work. To this end, we propose a novel Augmented Multi-modality Fusion (AMF) framework to efficiently generalize seen concepts to unobserved ones. Specifically, a novel knowledge discovery module named cross-domain augmentation is designed in both visual and semantic space to mimic novel knowledge unseen in the training stage, which is the key to handling the GZS-SBIR challenge. Moreover, a triplet domain alignment module is proposed to couple the cross-domain distributions of photos and sketches in visual space. To enhance the robustness of our model, we explore embedding propagation to refine both visual and semantic features by removing undesired noise. Eventually, visual-semantic fusion representations are concatenated for further domain discrimination and task-specific recognition, which tends to trigger cross-domain alignment in both visual and semantic feature space. In addition to popular ZS-SBIR benchmarks, a new evaluation protocol specifically designed for the GZS-SBIR problem is constructed from the DomainNet dataset with more diverse sub-domains, and the promising results demonstrate the superiority of the proposed solution over other baselines.

Haifeng Xia

Privacy Protected Multi-Domain Collaborative Learning

Abstract: Unsupervised domain adaptation (UDA) aims to transfer knowledge from one or more well-labeled source domains to improve model performance on a different-yet-related target domain without any annotations. However, existing UDA algorithms fail to bring any benefits to source domains and neglect privacy protection during data sharing. With these considerations, we define Privacy Protected Multi-Domain Collaborative Learning (P2MDCL) and propose a novel Mask-Driven Federated Network (MDFNet) to reach a “win-win” deal for multiple domains while keeping data protected. First, each domain is armed with an individual local model via a mask-disentangled mechanism to learn domain-invariant semantics. Second, the centralized server refines the global invariant model by integrating and exchanging local knowledge across all domains. Moreover, adaptive self-supervised optimization is deployed to learn discriminative features for unlabeled domains. Finally, theoretical studies and experimental results illustrate the rationality and effectiveness of our method for solving P2MDCL.

Nov 29

Hard Instances From Generalized Error Correcting Codes

Victor Bankston | Department of Computer Science, Tulane University

Abstract: Understanding the tractability of certain instances of NP-hard problems is an important issue in modern complexity theory. For example, when restricted to perfect graphs, the independence number $\alpha(G)$ can be computed in polynomial time via a semidefinite relaxation, Lovász's theta function $\vartheta(G)$. To better understand the intractability of Independent Set, we seek families of graphs for which $\vartheta(G)$ and $\alpha(G)$ differ to the largest possible extent. We propose a method for constructing such instances based on the fact that $\vartheta(G) > \alpha(G)$ may be viewed as a Bell inequality. We present a toy model in which measurements may be viewed as codes in $\mathbb{F}_5^n$, and we study the resulting Bell inequalities using techniques for analyzing error-correcting codes. We then study the more natural Pauli measurements and observe that these also have the structure of a generalized error-correcting code, an association scheme. Thus, we argue that we can construct hard instances of Independent Set by constructing 2-designs of Pauli measurements.
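The smallest classical example of the gap the abstract seeks is the 5-cycle $C_5$: its independence number is $\alpha(C_5) = 2$, while Lovász's relaxation gives $\vartheta(C_5) = \sqrt{5} \approx 2.236$ (a standard result of Lovász, 1979). The brute-force check below is a minimal sketch of that gap, not the code-based construction from the talk:

```python
from itertools import combinations

def independence_number(n, edges):
    """Brute-force alpha(G) for a small graph on vertices 0..n-1."""
    for k in range(n, 0, -1):
        for subset in combinations(range(n), k):
            s = set(subset)
            # independent iff no edge has both endpoints inside the subset
            if all(u not in s or v not in s for u, v in edges):
                return k
    return 0

# The 5-cycle C5: smallest classical example of a theta/alpha gap.
c5_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
alpha = independence_number(5, c5_edges)  # alpha(C5) = 2
theta = 5 ** 0.5  # Lovász (1979): theta(C5) = sqrt(5), strictly above alpha
```

Families of graphs where this gap grows as fast as possible are exactly the "hard instances" the abstract proposes to construct.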

Dec 6

Automated Deep Learning for Open & Inclusive AI

Jun (Luke) Huan | StylingAI Inc.

Abstract: Big Data, AI, and cloud computing are transforming our society. In areas such as game playing, image classification, and speech recognition, AI algorithms may have already surpassed human experts’ capability. We are observing the transformations that AI and data science bring to industry sectors such as e-commerce, social networking, finance, health, and transportation, among others. My talk covers two parts: (1) a brief introduction to the Baidu AutoDL project, where we use deep learning to design deep learning networks, and (2) the applications of automated deep learning in different vertical areas, including content generation and digital humans.

About the Speaker: Dr. Jun (Luke) Huan is the founder, president, and chief scientist of StylingAI Inc., a start-up aiming to develop and apply AI capabilities for automated content generation. Before that, he served as a distinguished scientist and the head of the Baidu Big Data Laboratory at Baidu Research. Before joining industry, he was the Charles E. and Mary Jane Spahr Professor in the EECS Department at the University of Kansas. From 2015 to 2018, Dr. Huan worked as a program director at the US NSF in charge of its big data program.

Dr. Huan works on data science, AI, machine learning, and data mining. His research is recognized internationally. He has published more than 150 peer-reviewed papers in leading conferences and journals and has graduated eleven Ph.D. students. He was a recipient of the NSF Faculty Early Career Development Award in 2009, and his group has won several best paper awards at leading international conferences. Dr. Huan's service record includes serving as Program Co-Chair of IEEE BIBM in 2015 and IEEE Big Data in 2019.

Dec 8

A Special Joint Talk of the Algebraic Geometry and Geometric Topology Seminar (Department of Mathematics) and the Department of Computer Science Colloquium

Erin Wolf Chambers | Saint Louis University

This talk will be held on Wednesday, December 8th, at 3:00 p.m. in Gibson 310. Please note the special weekday, time, and venue for this event.

Reeb Graph Metrics From the Ground Up

Abstract: The Reeb graph has been utilized in various applications, including the analysis of scalar fields. Recently, research has focused on using topological signatures such as the Reeb graph to compare multiple scalar fields by defining distance metrics on the topological signatures themselves. In this talk, we will introduce and study five existing metrics that have been defined on Reeb graphs: the bottleneck distance, the interleaving distance, the functional distortion distance, the Reeb graph edit distance, and the universal edit distance. This talk covers material from a recent survey paper, which makes multiple contributions: (1) it provides definitions and concrete examples of these distances in order to develop the reader's intuition, (2) it revisits previously proven results on stability, universality, and discriminativity, (3) it identifies and completes remaining properties which had only been proven (or disproven) for a subset of these metrics, (4) it expands the taxonomy of the bottleneck distance to better distinguish between variations which have been commonly miscited, and (5) it reconciles the various definitions and requirements on the underlying spaces needed for these metrics to be defined and their properties to be proven.

About the Speaker: Dr. Erin Wolf Chambers is a Professor at Saint Louis University in the Department of Computer Science, with a secondary appointment in the Department of Mathematics. Her research focus is on computational topology and geometry, with a more general interest in combinatorics and algorithms. Complementing this work, she is also active in projects to support and improve the culture and climate in computer science and mathematics at all levels. She currently serves as editor for several journals, on the board of trustees for the Society for Computational Geometry, and on the SafeToC organizing committee and as an advocate. She received her PhD in Computer Science from the University of Illinois at Urbana-Champaign in 2008, and was a Visiting Research Professor at Saarland University in Summer 2011.

Dec 13

Architectural Support for Interdisciplinary Data Science Research

Lu Peng | Louisiana State University

Abstract: The ever-increasing amount of global data introduces big challenges to computer systems in terms of performance, power consumption, reliability, and security. Deep learning and other advanced algorithms have been proposed to handle these problems at the software layer; however, they require significant computing resources and memory bandwidth. As a consequence, traditional CPU-based platforms are no longer the best choice for deploying these algorithms because they do not provide sufficient parallelism. Graphics Processing Units (GPUs) can provide improved performance, but at the cost of higher power consumption. FPGAs and ASICs have garnered attention due to their application-specific nature, ability to achieve high degrees of parallelism, and high energy efficiency. In this talk, I will introduce our recent work on computer system and architectural support for interdisciplinary data science research, including a hardware accelerator for deep neural networks, an accelerator design for smart contract processing, and an application of blockchain to contact tracing against COVID-19. Other recent work, including adapting the B+ tree to Graphics Processing Units (GPUs) and improving resilience for Big Data kernels, will be briefly introduced.

About the Speaker: Lu Peng is the Gerard L. “Jerry” Rispone Professor in the Division of Electrical and Computer Engineering, Louisiana State University, Baton Rouge, Louisiana. His research interests include computer systems and architecture, focusing on design issues in CPUs and GPUs, hardware accelerators, and applications for deep learning neural networks and blockchains. As PI or Co-PI, he has led or co-led several interdisciplinary research projects with collaborating researchers from different fields: Computer Science, Electrical Engineering, Statistics, Chemistry, Pathobiological Sciences, and Meteorology. His work has been supported by multiple federal and state agencies, including NSF, NIH, NRL, DOE/LLNL, ORAU, NASA/LaSpace, BoR, and LSU RoC, as well as industrial companies including Chevron and Xilinx. He was a recipient of the ORAU Ralph E. Powe Junior Faculty Enhancement Award in 2007 and Best Paper Awards at IEEE IGSC in 2019 and IEEE ICCD in 2001.

Summer 2021 Colloquia

Check back soon for more information on the computer science seminar series. If you would like to receive notices about upcoming seminars, you can subscribe to the announcement listserv. Unless otherwise noted, the seminars meet on Mondays at 4pm in Stanley Thomas 302.

July 29

Dissertation Defense Talk

Majid Mirzanezhad | Computer Science PhD Student, Tulane University

Please join us for Majid Mirzanezhad’s PhD dissertation defense as described below. This is our first in-person colloquium in a long time! While there is an option to join remotely, we sincerely hope you will be able to join in person. There will be a reception with snacks afterwards.

This talk will be held on Thursday, July 29th, at 2:00 p.m. CST in Stanley Thomas 302. Please note the special weekday and time for this event. This presentation will also be delivered online.

Abstract: The rapidly growing need for Geographic Information Systems (GIS) to better understand the environment has, over the past few decades, led many researchers and practitioners of various disciplines to design efficient algorithmic methods for real-world problems arising in intelligent transportation systems, urban planning, mobility, surveillance systems, and other areas. In this dissertation, we consider several topics in computational geometry that involve applications to maps and networks in GIS. We first propose several algorithms that capture the similarity between linear features, notably curves whose edges are relatively long. One of the popular metrics for capturing the similarity between curves is the Fréchet distance. We give a linear-time greedy algorithm for deciding and approximating the Fréchet distance and a near-linear-time algorithm for computing the exact Fréchet distance between two curves in any constant dimension. We also propose several efficient data structures for the approximate nearest-neighbor problem and for distance oracle queries among curves under the Fréchet distance.
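For readers unfamiliar with the metric, the sketch below computes the standard discrete Fréchet distance between two polylines via the classical Eiter-Mannila dynamic program. This is the discrete variant, shown only as an illustration of the concept, not one of the algorithms proposed in the dissertation:

```python
from math import hypot

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between 2D polylines P and Q (Eiter-Mannila DP)."""
    n, m = len(P), len(Q)
    dist = lambda i, j: hypot(P[i][0] - Q[j][0], P[i][1] - Q[j][1])
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = dist(i, j)
            if i == 0 and j == 0:
                ca[i][j] = d
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], d)   # can only advance along Q
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], d)   # can only advance along P
            else:
                # advance along P, along Q, or along both, whichever is best
                ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]), d)
    return ca[n - 1][m - 1]
```

Intuitively, this is the shortest "leash" needed for two agents walking monotonically along the two curves; for two parallel horizontal segments at unit vertical distance, the result is exactly 1.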

We exploit the metric studied above for simplification purposes. Specifically, we consider the problem of simplifying a feature, e.g., a graph, tree, or curve, into an alternative feature of minimum complexity such that the distance between the input and simplified features remains at most some threshold. We propose several algorithmic and NP-hardness results that depend on the distance measure used and on the vertex placement of the simplified feature, which can be selected from the input's vertices, its edges, or any points in the ambient space.

About the Speaker: Majid Mirzanezhad is a Ph.D. candidate in the Department of Computer Science at Tulane University. His research area is computational geometry with applications in GIS, primarily focused on approximation algorithms and data structures for curves and graphs. Prior to pursuing his Ph.D., Majid received his MSc and BSc in computer science from Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran.

Spring 2021 Colloquia

Check back soon for more information on the computer science seminar series. If you would like to receive notices about upcoming seminars, you can subscribe to the announcement listserv. Unless otherwise noted, the seminars meet on Mondays at 4pm in Stanley Thomas 302. However, due to the current pandemic, all colloquia are being conducted virtually.

Apr 26

Interdisciplinary Project Presentations

Karthik Shivaram and Xintian Li | Computer Science PhD Students, Tulane University

These presentations will be delivered online. You may access the presentations on Monday, April 26th, at 4:00 pm CST via the following link: . Meeting ID: 947 7760 4484. Passcode: 511369. Please be sure to mute your microphone when you log on.

Karthik Shivaram

Combating Partisan Homogenization in Content-Based News Recommendation Systems

Abstract: Content-based news recommendation systems build user profiles to identify important terms and phrases that correlate with the user’s engagement to make accurate recommendations. Prior work by Ping et al. [1] suggests that these recommendation systems tend to have a homogenization effect when a user’s political views are diverse over a set of topics. In this work we propose a novel attention-based neural network architecture in a multitask learning setting to overcome this problem of partisan homogenization.

Xintian Li

Evacuation Diffusion Modeling from Twitter

Abstract: Evacuations have a significant impact on saving human lives during hurricanes. However, as evacuation is a complex dynamic process, individual evacuation decisions are typically difficult to know in real time. Since a large amount of information is continuously posted on social media platforms by all populations, we can use it to predict individual evacuation behavior. In this project, we collect tweets posted during Hurricane Irma in 2017 and train a text classifier, using active learning, to distinguish tweets indicating positive evacuation decisions from both negative and irrelevant ones. We predict demographic information for each identified evacuee, based on which we use time series modeling to predict evacuation rate changes over time. We also use the demographic information to help predict possible evacuees in different time ranges. The results can be used to help inform the planning strategies of emergency response agencies.

Fall 2020 Colloquia

Oct 19

Interdisciplinary Project Presentations

Demi Qin and Akshay Mehra | Computer Science PhD Students, Tulane University

These presentations will be delivered online. You may access the presentations on Monday, October 19th, at 4:00 pm CST via the following link: . Meeting ID: 956 8563 7204. Please be sure to mute your microphone when you log on.

Demi Qin

Fast Prostate Cancer Diagnosis using Topological Data Analysis

Abstract: Topological data analysis (TDA) is attracting increasing interest among machine learning researchers due to its power to capture shape and structure in data. In this talk, we consider biopsy image classification of prostate cancer with TDA, which can utilize topological summaries of images in machine learning tasks. We begin with the theoretical background of TDA and present our previous work on prostate cancer diagnosis applying TDA in machine learning applications. Next, we pursue two directions for improving the use of TDA: 1) a parallel computation pipeline for our previous work; 2) a comparison of distance metrics on topological summaries. Our results give new insights into when topological summaries are most suitable and can be used to design better feature-based learning models with TDA.

Akshay Mehra

Penalty Method for Inversion-Free Deep Bilevel Optimization

Abstract: Bilevel optimization problems are at the center of several important machine learning problems such as hyperparameter tuning, learning with noisy labels, meta-learning, and adversarial attacks. In this presentation, I will talk about our algorithm for solving bilevel problems using the penalty method and discuss its convergence guarantees and show that it has linear time and constant space complexities. Small space and time complexities of our algorithm make it an effective solver for large-scale bilevel problems involving deep neural networks. I will present results of the proposed algorithm on data denoising, few-shot learning, and data poisoning problems in a large-scale setting and show that it outperforms or is comparable to previously proposed algorithms based on automatic differentiation and approximate inversion in terms of accuracy, run-time and convergence speed. 
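A minimal sketch of the penalty idea on a toy bilevel problem may help build intuition. All specifics here (the quadratic objectives, the penalty schedule, and the step sizes) are illustrative assumptions, not the algorithm from the talk; the point is only that folding the lower-level optimality condition into the objective as a penalty yields a single-level problem solvable by plain gradient descent, with no second-order inversion:

```python
def penalty_bilevel(lams=(1.0, 10.0, 100.0), steps=5000):
    """Toy bilevel problem solved by the penalty method.

    upper level: min_x f(x, y*(x)) with f(x, y) = x^2 + (y - 1)^2
    lower level: y*(x) = argmin_y g(x, y) with g(x, y) = (y - x)^2

    Penalized single-level objective: f(x, y) + lam * (dg/dy)^2.
    The true bilevel optimum is x = y = 0.5.
    """
    x, y = 0.0, 0.0
    for lam in lams:                      # gradually tighten the penalty
        lr = 1.0 / (2.0 + 16.0 * lam)     # step size below the curvature bound
        for _ in range(steps):
            gx = 2.0 * x - 8.0 * lam * (y - x)          # d/dx of penalized objective
            gy = 2.0 * (y - 1.0) + 8.0 * lam * (y - x)  # d/dy of penalized objective
            x, y = x - lr * gx, y - lr * gy
    return x, y
```

As lam grows, the penalty forces y toward the lower-level solution y = x, and the iterates approach the bilevel optimum (0.5, 0.5) using only first-order information, which is what makes this approach attractive at deep-network scale.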

Nov 9

Interdisciplinary Project Presentations

Erfan Hosseini, Pan Fang, and Henger Li | Computer Science PhD Students, Tulane University

These presentations will be delivered online. You may access the presentations on Monday, November 9th, at 4:30 pm CST via the following link: . Meeting ID: 936 6227 1094. Please be sure to mute your microphone when you log on.

Erfan Hosseini

The Study of Gentrification on Social Urban Simulation - How Income and Interest Can Shape Neighborhoods

Abstract: Gentrification is well known among sociologists for its complexity and vast effects on urban life. In fact, gentrification can be so complex that sociologists only study specific instances of it. The study of gentrification is important because changes in gentrified urban areas directly affect the surrounding suburban and rural areas, and hence a huge population. In this project, we aim to simulate an urban environment and observe how gentrification starts and how it can affect the city in different situations. Furthermore, we experiment with the factors of gentrification to find possible bottlenecks and to try to prevent it. This study can help us understand gentrification better and manage cities properly when facing it.

Pan Fang

Distance Measures for Embedded Graphs

Abstract: Measuring the similarity of two objects is an essential step in many applications, particularly when comparing objects that can be modeled as graphs. In this project, we explore and study the existing distance measures for planar graphs. When comparing their performance with respect to factors such as computability, quality of similarity, and robustness, none of them achieves the desired result in all of these aspects. Besides these computational aspects, we also take the mathematical theory into account for the comparison. Specifically, when a distance measure for planar graphs is a metric, we investigate topological properties of the resulting metric space (e.g., connectedness, completeness, and compactness). We comprehensively summarize the existing work in this area and analyze the strengths and weaknesses of these methods. This project presents a critical assessment and concise review of this field that is directly accessible to most readers.

Henger Li

Learning to Pool: Multi-Arm Bandit for COVID-19 Group Testing

Abstract: The worldwide coronavirus (COVID-19) pandemic has grown exponentially and caused enormous loss of life and economic damage. Due to its highly contagious nature, large-scale, rapid testing to screen for the virus's presence is vital to controlling its spread. Recent RT-PCR-based group testing, or pooled testing, is an effective method to vastly reduce the number of tests. However, current group testing suffers from dilution in pooled samples, which makes it harder to detect early-stage infection with low viral load. We propose a multi-arm bandit framework to balance the trade-off between the number of tests and the false-negative rate by dynamically deciding the group size and which group to test, based on historical test results obtained during group testing.
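For context, the sketch below counts the tests used by the classical two-stage (Dorfman) pooling scheme that adaptive approaches like the proposed bandit framework build upon; the population size, prevalence, and pool size here are illustrative assumptions:

```python
def dorfman_tests(infected, population, pool_size):
    """Count RT-PCR tests used by two-stage (Dorfman) pooled testing."""
    tests = 0
    for start in range(0, population, pool_size):
        pool = range(start, min(start + pool_size, population))
        tests += 1                              # one test for the pooled sample
        if any(i in infected for i in pool):    # positive pool: retest individually
            tests += len(pool)
    return tests

# 1% prevalence, one infected person per hundred (a deterministic toy scenario)
infected = set(range(0, 1000, 100))
pooled = dorfman_tests(infected, 1000, 10)  # 100 pool tests + 10 pools x 10 retests
```

In this scenario pooling needs 200 tests instead of 1,000 individual ones; the bandit framework in the abstract goes further by choosing pool sizes and test targets adaptively, while also accounting for the dilution-induced false negatives that this static scheme ignores.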

Nov 25

Thesis Defense Talk

Sushovan Majhi | Mathematics PhD Student, Tulane University

This presentation will be delivered online. You may access the presentation on Tuesday, November 25th, at 10:00 am CST via the following link:

Topological Methods in Shape Reconstruction and Comparison

Abstract: Most of the modern technologies at our service rely on "shapes" in some way or another. Be it Google Maps showing you the fastest route to your destination or the 3D printer on your desk creating an exact replica of a relic---shapes are being repeatedly sampled, reconstructed, and compared by intelligent machines. With the advent of modern sampling technologies, shape reconstruction and comparison techniques have matured profoundly over the last two decades. In this defense talk, we will catch a glimpse of the provable topological methods we propose to advance the study of Euclidean shape reconstruction and comparison. We investigate how topological concepts and results---like the Vietoris-Rips and Cech complexes, the Nerve Lemma, discrete Morse theory, etc.---lend themselves to the reconstruction of geodesic spaces from a noisy sample. Our study also delves into the approximation of the Gromov-Hausdorff distance, which is regarded as a robust framework for shape comparison. We address some of the pivotal questions and challenges pertaining to its efficient computation, particularly for Euclidean subsets. Finally, we present an approximation algorithm, with a tight approximation factor of (1+1/4), for the Gromov-Hausdorff distance on the real line.

Spring 2020 Colloquia

Feb 10

Interactive Visual Analysis at Scale: From Data to Actionable Insights

Fabio Miranda | New York University

Abstract: Over the past decade, technological innovations have produced a wealth of large and complex data sets on almost every aspect of human life, from natural science to business and social science. The analysis of this data is usually an exploratory process in which domain expertise plays an important role. It is, therefore, essential to integrate the user into the analysis loop, enabling them to formulate hypotheses and gain actionable insights into domain-specific problems. Interactive visualization is central in the support of this process, but the scale and complexity of the data present several challenges. My research focuses on proposing new methods and systems that allow for the interactive visual analysis of large data of different types, such as time-series, spatio-temporal, geometry, and image data. By combining visualization, machine learning, data management, and computer graphics, my work tackles fundamental challenges in data science, enabling effective analysis of large data to untangle real-world problems. In this talk, I will present my most recent contributions in the interactive visual analysis and exploration of large urban data, motivated by problems such as urban noise, neighborhood characterization, accessibility, and shadow impact on public spaces. The techniques and tools have been used by different domain experts, including urban planners, architects, occupational therapists, and acoustics researchers, allowing them to engage in data-driven science to better understand cities.

About the Speaker: Fabio Miranda is a postdoctoral research associate at the Visualization and Data Analytics Center (VIDA) and the Center for Urban Science and Progress (CUSP) at New York University. He received his PhD from New York University in 2018, advised by Professor Claudio T. Silva. During his PhD studies, he completed internships at Argonne National Laboratory, IBM Research, AT&T Labs Research, and Sandia National Labs. His research proposes new techniques that allow for the interactive visual analysis of large-scale data. He has worked closely with domain experts from different fields, from urban planning to occupational therapy, and the outcome of these collaborations includes not only research published in leading visualization, database, HCI, and AI venues, but also systems that were made available to experts in academia, industry, and government agencies. His work has also received extensive coverage from different media outlets, including The New York Times, The Economist, Architectural Digest, and Curbed, among others.

Feb 12

Trajectories - My Journey So Far

Dieter Pfoser | George Mason University

This talk will be held on Wednesday, February 12th, at 4:00 p.m. in Stanley Thomas 302. Please note the special weekday for this event.

Abstract: Trajectory datasets have sparked a lot of research ranging from physical database work and graph algorithms to becoming a valuable resource in urban science. We will initially discuss fundamental algorithmic efforts related to spatiotemporal database queries and using trajectories to calculate travel times for road networks and their effect on shortest path calculations. While interesting as a data type, trajectories are also an important resource when it comes to understanding human mobility and what goes on in a city. Here, we will look at how people’s itineraries can be used to compile a trajectory-driven travel guide and how we identify underlying transportation networks and respective travel choices. The talk concludes with current challenges and research directions.

About the Speaker: Dr. Dieter Pfoser is a professor and chair of the Dept. of Geography and GeoInformation Science at George Mason University. He received his PhD in computer science from Aalborg University, Denmark in 2000. At GMU he teaches courses related to geospatial data management, Linked Data, Web application development using open-source software, and data visualization. His research interests include data management, data mining for spatial and spatiotemporal data, graph algorithms for dynamic networks, and user-generated content, e.g., map-matching and map construction algorithms. Over the years, Dr. Pfoser’s research has been supported by grants from NSF, DARPA, NGA, DHS, and the European Commission. More information on his work can be found at

Feb 13

Optimizing Distributed Machine Learning with Reinforcement Learning

Hao Wang | University of Toronto

This talk will be held on Thursday, February 13th, at 2:00 p.m. in Norman Mayer, Room # 200A. Please note the special weekday, time, and venue for this event.

Abstract: In the era of Internet of Things, mobile computing and Big Data, millions of sensors and mobile devices are constantly generating massive volumes of data. To utilize the vast amount of data without violating data privacy, Federated Learning has emerged as a new paradigm of distributed machine learning that orchestrates model training across mobile devices. In this talk, I will first introduce the current challenges in distributed machine learning, and then present my recent work on statistical heterogeneity in federated learning, and distributed machine learning on a serverless architecture. Specifically, I will talk about applying reinforcement learning to optimize distributed machine learning by learning the best choice for task scheduling and resource provisioning.

About the Speaker: Hao Wang is a 5th-year Ph.D. candidate at the University of Toronto under the supervision of Professor Baochun Li. He received his B.E. degree in Information Security and his M.E. degree in Software Engineering from Shanghai Jiao Tong University in 2012 and 2015, respectively. His research interests include large-scale data analytics, distributed machine learning, and datacenter networking. He has published 16 papers (including eight first-author papers) in prestigious networking and systems conferences and journals, such as INFOCOM, SoCC, TPDS, and ToN. For more information about Hao, please visit

Feb 17

Enabling Multidisciplinary Science With Social Media Using Natural Language Processing and Network Analysis

Aron Culotta | Illinois Institute of Technology

Abstract: Online social networks present exciting opportunities for conducting multidisciplinary research in human sciences such as public health, sociology, political science, and marketing. However, properly analyzing this noisy, non-representative data requires advances in natural language processing, network analysis, machine learning, and causal inference. In this talk, I will summarize our recent progress in three fundamental areas that support this growing field: (1) controlling for confounds in text classification; (2) using distant supervision to train machine learning models; (3) using causal inference to understand how language affects perception. Applications discussed will include smoking cessation, deceptive marketing, political sentiment, and disaster management.

About the Speaker: Aron Culotta is an Associate Professor of Computer Science at the Illinois Institute of Technology in Chicago, where he leads the Text Analysis in the Public Interest Lab and co-directs the bachelor's and master's programs in artificial intelligence. He obtained his Ph.D. in Computer Science from the University of Massachusetts, Amherst in 2008, where he developed machine learning algorithms for natural language processing. He was a Microsoft Live Labs Fellow, completed research internships at IBM, Google, and Microsoft Research, received paper awards at AAAI and CSCW, and is a Program co-Chair for ICWSM-2020. His research is supported by several NSF-funded collaborations with researchers in public health, political science, marketing, and emergency management.

Mar 2

Making Decisions From Human Knowledge and Preferences

Rupert Freeman | Microsoft Research

Abstract: Even in the age of big data and machine learning, human knowledge and preferences still play a large part in decision making. For some tasks, such as predicting complex events like recessions or global conflicts, human input remains a crucial component, either in a standalone capacity or as a complement to algorithms and statistical models. In other cases, a decision maker is tasked with utilizing human preferences to, for example, make a popular decision over an unpopular one. However, while often useful, eliciting data from humans poses significant challenges. First, humans are strategic, and may misrepresent their private information if doing so can benefit them. Second, when decisions affect humans, we often want outcomes to be fair, not systematically favoring one individual or group over another.

In this talk, I discuss two settings that exemplify these considerations. First, I consider the participatory budgeting problem in which a shared budget must be divided among competing public projects. Building on classic literature in economics, I present a class of truthful mechanisms and exhibit a tradeoff between fairness and economic efficiency within this class. Second, I examine the classic online learning problem of learning with expert advice in a setting where experts are strategic and act to maximize their influence on the learner. I present algorithms that incentivize truthful reporting from experts while achieving optimal regret bounds.
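For readers unfamiliar with the expert-advice setting, the classical non-strategic baseline is the multiplicative-weights (Hedge) algorithm: maintain a weight per expert, predict with the weighted average, and exponentially downweight experts that incur loss. The sketch below shows only that baseline, not the talk's incentive-compatible variants; the learning rate and example losses are illustrative choices.

```python
import math

def hedge(losses, eta=0.5):
    """Hedge / multiplicative weights for learning with expert advice.
    `losses` is a list of rounds; each round gives a loss in [0, 1]
    per expert. Returns the learner's cumulative expected loss and
    the final weight vector."""
    n = len(losses[0])
    w = [1.0] * n
    total_loss = 0.0
    for round_losses in losses:
        z = sum(w)
        probs = [wi / z for wi in w]  # distribution over experts
        total_loss += sum(p * l for p, l in zip(probs, round_losses))
        # Exponentially penalize experts in proportion to their loss.
        w = [wi * math.exp(-eta * l) for wi, l in zip(w, round_losses)]
    return total_loss, w

# Two experts over 10 rounds: the first is always right (loss 0),
# the second always wrong (loss 1).
loss, weights = hedge([[0.0, 1.0]] * 10)
# the learner's cumulative loss stays far below 10, and nearly all
# weight shifts onto the first expert
```

The strategic setting in the talk asks what happens when experts can misreport to gain influence; the point of the sketch is only what "optimal regret" is measured against.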

About the Speaker: Rupert Freeman is a postdoc at Microsoft Research New York City. Previously, he received his Ph.D. from Duke University under the supervision of Vincent Conitzer. His research focuses on the intersection of artificial intelligence and economics, particularly in topics such as resource allocation, voting, and information elicitation. He is the recipient of a Facebook Ph.D. Fellowship and a Duke Computer Science outstanding dissertation award.

Mar 4

Towards an Efficient, Scalable, Dynamic and Flexible NFV-based Data Plane

Wei Zhang | Microsoft Azure Cloud

This talk will be held on Wednesday, March 4th, at 4:00 p.m. in Stanley Thomas 302. Please note the special weekday for this event.

Abstract: Network function virtualization (NFV) allows traditional hardware middleboxes such as routers and firewalls to run as software on commodity servers, which enables complex, flexible, and customized network services. Unfortunately, software is typically slower than hardware. In addition, different network flows may need to be steered through different sequences of network functions (called service chains) according to customized needs. Furthermore, as network functions become more complex, we need to bring together advances in operating systems and networking to build efficient and scalable network functions.

In this talk, I will describe an efficient and flexible data-plane platform, openNetVM, which processes packets at line rate with zero extra packet copies. Next, I will present Flurries, a data-plane management controller that provides flow-level performance management and customized service chains. Then, I will describe a software-based load balancer as an example of how to build efficient and scalable network functions on top of an NFV platform. Finally, I will conclude the talk with a brief discussion of my future research plans.

About the Speaker: Wei Zhang is currently an R&D engineer at Microsoft. In August 2018, she finished her Ph.D. in the Department of Computer Science at the George Washington University. Her projects cover various aspects of computer networks and systems, aiming at building efficient, scalable, and flexible platforms for software-based network functions and developing efficient resource management and scheduling solutions for cloud systems. Her research interests include cloud computing, operating systems, distributed systems and network systems, software-hardware co-design, and resource disaggregation. Her research has appeared in a number of premier conferences and workshops in networking and cloud systems, including CoNEXT, SIGCOMM, Middleware, SOSR, and HotCloud. She received a Best Paper Award at ICAC 2016, a Cisco scholarship in 2016, second prize in the GENI networking competition in 2015, and Beihang's outstanding graduate student and national scholarships in 2014. She has held research internships at HP Labs, AT&T Labs Research, Futurewei, VMware, Baidu, and Oracle in systems, networking, and cloud groups.

Mar 9

Robust Multi-view Visual Learning: A Knowledge Flow Perspective

Zhengming (Allan) Ding | Indiana University-Purdue University Indianapolis

Abstract: Multi-view data are extensively accessible nowadays thanks to various types of features, viewpoints, and sensors. For example, the most popular commercial depth sensor, Kinect, uses both visible-light and near-infrared sensors for depth estimation; autonomous driving uses both visual and radar/lidar sensors to produce real-time 3D information on the road; and face analysis algorithms prefer face images from different views for high-fidelity reconstruction and recognition. All of these tend to facilitate better data representation in different application scenarios. This talk covers most multi-view visual data representation approaches from two knowledge-flow perspectives, i.e., knowledge fusion and knowledge transfer, ranging from conventional multi-view learning to zero-shot learning, and from transfer learning to few-shot learning.

About the Speaker: Zhengming Ding received the B.Eng. degree in information security and the M.Eng. degree in computer software and theory from the University of Electronic Science and Technology of China (UESTC) in 2010 and 2013, respectively. He received the Ph.D. degree from the Department of Electrical and Computer Engineering, Northeastern University, USA, in 2018. He has been a faculty member in the Department of Computer, Information and Technology at Indiana University-Purdue University Indianapolis since 2018. His research interests include transfer learning, multi-view learning, and deep learning. He received the National Institute of Justice Fellowship during 2016-2018. He was the recipient of the best paper award (SPIE 2016) and a best paper candidate award (ACM MM 2017). He is currently an Associate Editor of the Journal of Electronic Imaging (JEI) and IET Image Processing. He is a member of IEEE, ACM, and AAAI.

Mar 16

Discerning Code Changes From a Security Perspective

Yue Duan | Cornell University

This presentation will be delivered online. You may access the presentation on Monday, March 16th, at 3:30 pm CST via the following link:

Meeting ID: 782 675 4964

Abstract: Programs are not immutable. In fact, most programs undergo constant change, for security reasons (e.g., vulnerability fixes) and non-security reasons (e.g., new features). These code changes pose significant security challenges. In this talk, I will present my approach, which combines static/dynamic program analysis with other techniques, including deep learning and virtual machine introspection (VMI), to understand code changes from a security perspective in the mobile and PC software domains and to solve real-world security issues. First, Android packers, a set of code transformation techniques, have rendered existing malware detection techniques obsolete. I will talk about DroidUnpack, a VMI-based Android packing analysis framework, which we used to perform the first large-scale systematic study of Android packing techniques, with some surprising findings. Second, Android third-party libraries (TPLs) have become one of the major sources of Android security issues. We propose LibBandAid to automatically generate updates for TPLs in Android apps in a non-intrusive fashion. Third, I will present DeepBinDiff, a novel unsupervised, deep-neural-network-based, program-wide code representation learning technique for binary diffing.

About the Speaker: Yue Duan is currently a Postdoctoral Researcher at Cornell University. He received his Ph.D. in Computer Science from UC Riverside, and his M.S. and B.S. from Syracuse University and Xi'an Jiaotong University, respectively. During his Ph.D. studies, he interned at NEC Labs and Fujitsu Laboratories of America. Before that, he served as a system software engineer on Nvidia's GPU driver team for 1.5 years. His research interests lie mainly in system security, mobile security, deep learning, and blockchain. His work has been extensively published in leading security conferences, including ACM CCS, NDSS, and RAID.

Mar 18

Information Extraction and Fusion for Improving Personal and Public Health Safety

Sarah Preum | University of Virginia

This presentation will be delivered online. You may access the presentation on Wednesday, March 18th, at 3:30 pm CST via the following link:

Meeting ID: 536 848 068

Abstract: We are increasingly interacting with pervasive applications from a wide variety of domains, including smart health, intelligent assistants, and smart cities. However, tremendous challenges remain in understanding the effects of such interactions and ensuring the safety of pervasive systems. In this talk, I will demonstrate how deep semantic inference over the data generated by interactive, pervasive systems can improve personal and public health safety. For personal health safety, I tackle the challenge of detecting conflicting information and interventions originating from various information sources and health applications, respectively. I will present novel textual inference, time-series prediction, and information fusion techniques to (i) detect and predict conflicts and (ii) deliver safe, interpretable, and personalized health interventions. For public health safety, I address the challenge of providing real-time decision support to emergency responders. I will demonstrate a novel, weakly supervised information extraction solution that enables the protocol-driven decision support pipeline of an intelligent assistant for emergency response. Throughout the talk, I will show how I address the challenges of limited training data, knowledge integration, and interpretability for safety-critical pervasive applications with a low tolerance for error and strict domain constraints. Finally, I will propose natural language processing, multi-modal data fusion, and knowledge engineering techniques to develop more capable intelligent assistants and enhance data-driven decision support systems.

About the Speaker: Sarah Preum is a Ph.D. candidate in the Department of Computer Science at the University of Virginia (UVA), where she works with Professor John Stankovic. Sarah's research interests lie broadly at the intersection of artificial intelligence and cyber-physical systems, with a focus on increasing the safety and effectiveness of human-centric systems through information extraction and fusion. Sarah has developed novel natural language processing, knowledge integration, and temporal modeling techniques to provide personalized decision support for health safety. She received her M.Sc. in Computer Science from the University of Virginia in 2015. Sarah graduated from Bangladesh University of Engineering and Technology (BUET) in 2013 summa cum laude with a B.Sc. degree in Computer Science and Engineering (CSE). Before joining UVA, she served as a lecturer in CSE at BUET. She also spent time at the Bosch Research and Technology Center as an AI research intern. Sarah is a recipient of the UVA Graduate Commonwealth Fellowship, the Adobe Research Graduate Scholarship, the NSF Smart and Connected Health Student Award, and the UVA Big Data Fellowship. Her work has been published in premier CS conference proceedings and journals, including ICDE, CIKM, PerCom, IPSN, ACM CSUR, and IEEE Trans. on CPS. More information about Sarah and her work can be found on her website:

Apr 24

Computational Models of Heuristics and Bias in Human Behavior

Jaelle Scheuerman | Tulane University

This presentation will be delivered online. You may access the presentation on Friday, April 24th, at 3:00 pm CST via the following link. Please be sure to mute your microphone when you log on.

Abstract: Heuristics allow us to navigate a complex and uncertain world, react to new situations, and make decisions without needing all the information. While this is often helpful, using heuristics can sometimes lead to biased actions and potentially costly errors. Many algorithms are developed with the goal of improving human performance, either by offering recommendations or by seeking to predict and thereby mitigate human error. Such tools benefit when they can account for human bias and behavior, either through analysis of behavioral data in the design process or by incorporating computational cognitive models that simulate and predict behavior. This dissertation explores several novel computational approaches to simulating bias and heuristics in human behavior at different cognitive levels of abstraction. First, we develop a model of attentional bias in spatial auditory attention for the ACT-R cognitive architecture. We then move to a more complex task, modeling the confirmation bias that occurs when making decisions with noisy feedback. Finally, we consider behavior in a multi-agent setting, collecting and analyzing experimental data about heuristics in voting. In all three instances, we implement different computational models of human behavior and evaluate them on behavioral data. This work provides tools that support a better understanding of human behavior and enables the integration of bias and heuristics into human-aware computing, laying the groundwork for designing new computational approaches inspired by human heuristic behavior.

About the Speaker: Jaelle Scheuerman is a Ph.D. Candidate in the Computer Science department at Tulane University. In her research, she uses a combination of computational methods and behavioral data to inform the design of human-aware tools. Prior to pursuing her Ph.D., Jaelle received an M.S. in Human-Computer Interaction at Iowa State University and a B.S in Computer Science at South Dakota School of Mines.

Fall 2019 Colloquia

Check back soon for more information on the computer science seminar series. Unless otherwise noted, the seminars meet on Mondays at 4pm in Stanley Thomas 302. If you would like to receive notices about upcoming seminars, you can subscribe to the announcement listserv.

Sept 6

Cultivating Students' Moral Imagination in Science, Technology, and Engineering Courses

Emanuelle Burton | University of Illinois at Chicago

This event will be held on Friday, 9/6/2019, from 4:00 - 5:00 p.m. in Stanley Thomas, Room 316. Please note the special weekday and venue for this event.

Abstract: Accreditation bodies for many engineering and science fields require that students receive instruction in ethics. While this typically takes the form of case studies relating to research conduct, we contend that a meaningful education in ethics requires more than just transmitting information. An effective course enables students to begin (or continue) the work of self-transformation into more thoughtful and conscientious citizens of the profession and the world. How can we create classrooms and curricula that create the conditions for our students' transformation? And how can we as instructors induce students to undertake the messy and challenging work that this transformation entails? This session will include both a brief talk and an open discussion.

About the Speaker: Emanuelle Burton is a lecturer in ethics for the CS department at UIC. She holds a PhD in religion and literature from the University of Chicago. She is the co-author, with Judy Goldsmith, Nicholas Mattei, Sara-Jo Swiatek, and Corey Siler, of the forthcoming textbook "Understanding Technology Ethics Through Science Fiction" from MIT Press. Her courses have recently been featured in Wired Magazine and the Communications of the ACM.

Oct 21

Interdisciplinary Project Presentations

Victor Bankston and Avik Bhattacharya | Tulane University

Title: A Distributed Search for Graphs for Which Lovasz's Theta Is a Poor Approximation of the Independence Number
Speaker: Victor Bankston (Computer Science PhD Student, Tulane University)

Title: Improving Prediction of MHC Class II Antigen Presentation Using Conformational Stability Data
Speaker: Avik Bhattacharya (Computer Science PhD Student, Tulane University)

Oct 29

Machine Learning for Medical Image Processing

Dong Hye Ye | Marquette University

This event will be held on Tuesday, 10/29/2019, from 12:30 p.m. - 1:45 p.m. in Stanley Thomas, Room 302. Please note the special weekday and time for this event.

Abstract: Medical image processing is essential for clinical diagnosis, providing quantitative visualization and analysis of the underlying anatomy. In recent years, it has become increasingly easy to gather large quantities of medical images, and processing these large image databases is key to unlocking a wealth of information. However, both interpreting that big data and connecting it to downstream medical image processing remain challenging. To tackle this challenge, I unlock the valuable prior knowledge in large image databases via machine learning techniques and use it to improve medical image processing. In this talk, I will present how machine learning can help medical image processing tasks such as CT metal artifact reduction, organ segmentation, and high-throughput microscopic imaging.

About the Speaker: Dr. Dong Hye Ye is an Assistant Professor in Electrical and Computer Engineering at Marquette University. His research interests are in advancing image processing via machine learning. His publications have been awarded Best Paper at MICCAI-MedIA 2010, Best Paper Runner-Up at ICIP 2015, and Best Paper at EI-IMAWM 2018. During his PhD, Dong Hye conducted research at the Section of Biomedical Image Analysis (SBIA) at the Hospital of the University of Pennsylvania (HUP) and at Microsoft Research Cambridge (MSRC). He received his Bachelor's degree from Seoul National University in 2007 and his Master's degree from the Georgia Institute of Technology in 2008.

Nov 1

Borda Count in Collective Decision Making: A Summary of Recent Results

Jörg Rothe | Heinrich-Heine-Universität Düsseldorf

This event will be held on Friday, 11/01/2019, from 4:00 p.m. - 5:00 p.m. in Stanley Thomas, Room 302. Please note the special weekday for this event. 

Abstract: Borda Count is one of the earliest and most important voting rules. Going far beyond voting, we summarize recent advances related to Borda in computational social choice and, more generally, in collective decision-making. We first present a variety of well-known attacks modeling strategic behavior in voting—including manipulation, control, and bribery—and discuss how resistant Borda is to them in terms of computational complexity. We then describe how Borda can be used to maximize social welfare when indivisible goods are to be allocated to agents with ordinal preferences. Finally, we illustrate the use of Borda in forming coalitions of players in a certain type of hedonic game. All these approaches are central to applications in artificial intelligence.
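As a quick illustration of the Borda rule itself (not of the talk's results): with m candidates, a candidate earns m-1 points for each first-place ranking, m-2 for each second place, and so on, and the highest total wins. A minimal sketch:

```python
from collections import defaultdict

def borda_winner(ballots):
    """Given ballots as lists ranking candidates from most to least
    preferred, return (winner, scores) under the Borda rule: a
    candidate in position i of a ballot over m candidates earns
    m - 1 - i points."""
    scores = defaultdict(int)
    for ballot in ballots:
        m = len(ballot)
        for i, candidate in enumerate(ballot):
            scores[candidate] += m - 1 - i
    winner = max(scores, key=scores.get)
    return winner, dict(scores)

# Three voters ranking candidates a, b, c:
winner, scores = borda_winner([["a", "b", "c"],
                               ["a", "c", "b"],
                               ["b", "a", "c"]])
# a: 2+2+1 = 5, b: 1+0+2 = 3, c: 0+1+0 = 1, so a wins
```

The attacks surveyed in the talk (manipulation, control, bribery) all concern how strategic changes to such ballots can alter this outcome.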

About the Speaker: Jörg Rothe received his diploma in 1991, his PhD in 1995, and his habilitation degree in 1999, each from Friedrich-Schiller-Universität Jena, Germany. In 1993-1994 he was a visiting scholar and in 1997-1998 a visiting assistant professor at the CS department of University of Rochester, each with a DAAD research fellowship for working with Lane A. Hemaspaandra. Since 2000 he has been a professor of computer science at Heinrich-Heine-Universität Düsseldorf, Germany; in 2013 he was a visiting professor at the CS department of Stanford University, and since 2014 he has been the chair of the CS department at Heinrich-Heine-Universität Düsseldorf. After receiving a DFG Heisenberg Fellowship in 2000 he has been the principal investigator of six DFG projects, a principal investigator in a EUROCORES project of the European Science Foundation (ESF), and was involved in various other international collaborative research projects. His research interests are in computational social choice, algorithmic game theory, fair division, and argumentation theory, typically focusing on the algorithmic and complexity-theoretic properties of the related problems.

Nov 20

Geometric Algorithms and Data Structures for Curves and Maps

Majid Mirzanezhad | Tulane University

Please join us for a PhD Prospectus presentation by Tulane computer science PhD student, Majid Mirzanezhad. This event will be held on Wednesday, 11/20/2019, at 3:00 p.m. in the Boggs Center for Energy and Biotechnology, Room 122. Please note the special weekday, time, and venue for this event.

Abstract: Because of its many applications, Geographic Information Systems (GIS) attracts researchers from various disciplines to develop methods and tools for a better understanding of the world and the surrounding environment. In this prospectus, we consider several problems, and future studies, involving applications to maps and networks in a geographic scene. We propose algorithmic methods for solving some important problems, described in the following:

We first consider computing the Frechet distance for a special class of curves. This distance is a very commonly used metric for capturing the similarity between GPS trajectories, piecewise-linear functions, and, generally, any linear features on a map. The classical algorithm for computing this distance runs in time quadratic in the total complexity of the two piecewise-linear curves. We propose several algorithms and a data structure that compute the Frechet distance substantially faster in the special case when the Frechet distance is relatively small. In particular, we give a linear-time greedy algorithm for deciding and approximating the Frechet distance, and a near-linear-time algorithm for computing the exact distance between two curves in any constant dimension.
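The flavor of the quadratic-time baseline can be seen in the classical dynamic program for the *discrete* Frechet distance over point sequences (Eiter and Mannila's variant); the prospectus itself concerns the continuous distance and faster algorithms, so this is background only.

```python
from functools import lru_cache
from math import dist

def discrete_frechet(p, q):
    """Quadratic-time DP for the discrete Frechet distance between
    point sequences p and q: the smallest "leash" length needed to
    walk both sequences forward, one point at a time."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(p[i], q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        # Advance along p, along q, or along both; keep the best.
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(p) - 1, len(q) - 1)

# Two parallel polylines one unit apart: the leash never exceeds 1.
d = discrete_frechet([(0, 0), (1, 0), (2, 0)],
                     [(0, 1), (1, 1), (2, 1)])
# d == 1.0
```

The table of c(i, j) values has size |p| x |q|, which is exactly the quadratic cost the proposed algorithms avoid when the distance is small.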

Next, we exploit the metric studied above for simplification purposes. We study how to simplify a polygonal curve on a map that does not need redundant detail or high resolution. Simplification reduces the complexity of the input object and therefore speeds up future computations, at the cost of some approximation error introduced in the preprocessing stage. We specifically consider the problem of computing an alternative polygonal curve with the minimum number of links whose distance to the input curve is at most some given real value. From the theoretical point of view, we prove that as the distance measure changes, the problem becomes either NP-hard or polynomially solvable. We also propose several exact and approximation algorithms when the placement of the output curve respects certain degrees of freedom in the ambient space.

In our future studies, we intend to extend the simplification problem to geometric input trees and graphs, since they represent non-linear structures of features on a map such as rivers, watersheds, transportation networks, and so forth. We present some preliminary results and end this prospectus with some interesting algorithmic problems in this context.

Dec 2

Uncertainty Visualization: From Metrics to Imagery

Kristi Potter | National Renewable Energy Laboratory (NREL)

Abstract: Uncertainty is ubiquitous in scientific data. Aberrations such as variability, error, and missing data are key characteristics providing insights into the reliability of a given dataset. Without this information, data is often considered incomplete; however, many visualizations do not include uncertainty due to the increased complexity of the visual design. In my own work, I often encounter uncertainty stemming from large-scale, multi-run simulations, where the variability between simulation runs reflects the range of possible outcomes. My approach to these problems often includes multiple linked windows, color mapping, and contouring, as well as more sophisticated but domain-specific methods.

In this talk, I will go over the basics of uncertainty characterization and the challenges of including uncertainty in data visualization. I will briefly cover the types of uncertainty and the mathematical metrics most often used to measure uncertainty for visualization purposes, including descriptive statistics and probability distributions. I will also provide a short history of uncertainty visualization techniques and a small subset of modern approaches that are easily applied in readily available software. Finally, I will discuss my own work in ensemble visualization, including the tools and techniques used to produce the resulting visualizations.
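A minimal sketch of the descriptive-statistics metrics mentioned above, applied to a hypothetical ensemble of simulation runs (the data and variable names are invented for illustration):

```python
from statistics import mean, stdev

# Hypothetical ensemble: five simulation runs sampled at four grid points.
runs = [
    [1.0, 2.0, 3.0, 4.0],
    [1.1, 2.2, 2.9, 4.3],
    [0.9, 1.8, 3.2, 3.6],
    [1.0, 2.1, 3.1, 4.1],
    [1.0, 1.9, 2.8, 4.0],
]

# Per-point descriptive statistics: the mean might drive the main color
# map, while the standard deviation drives an uncertainty overlay
# (e.g., opacity or texture density).
per_point = list(zip(*runs))
means = [mean(p) for p in per_point]
spreads = [stdev(p) for p in per_point]
for i, (m, s) in enumerate(zip(means, spreads)):
    print(f"point {i}: mean={m:.2f}, stdev={s:.2f}")
```

Mapping these two derived fields to separate visual channels is one of the simplest forms of uncertainty visualization the talk surveys.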

About the Speaker: Dr. Kristi Potter is a Senior Scientist specializing in data visualization at the National Renewable Energy Laboratory (NREL). Her current research is focused on methods for improving visualization techniques by adding qualitative information about reliability to the data display. This work includes researching statistical measures of uncertainty, error, and confidence levels, and translating the semantic meaning of these measures into visual metaphors. She is also interested in topics related to decision making, performance visualization, method evaluation, and application-specific techniques. Kristi has over 15 years of experience in visualization creation, design, and deployment spanning multiple disciplines, including atmospheric sciences, materials modeling, geographical mapping, and the humanities. Prior to joining NREL in 2017, she worked as a research computing consultant at the University of Oregon, providing visualization services, computational training and education, and other support to researchers across campus, and as a research scientist at the University of Utah, working on projects related to the visualization of uncertainty and error in data. Her dissertation work focused on the visual representation of variability within ensemble suites of simulations covering multiple parameter settings and initial conditions. Her master's work developed the use of sketch-based methods for conveying levels of reliability in architectural renderings. Kristi is currently working in NREL's Insight Center on high-dimensional data visualization techniques and web-based deployment of visualization applications.


Spring 2019 Colloquia

Check back soon for more information on the computer science seminar series. Unless otherwise noted, the seminars meet on Mondays at 4pm in Stanley Thomas 302. If you would like to receive notices about upcoming seminars, you can subscribe to the announcement listserv.

Jan 14

Approximation Algorithms for Optimal Packing in Two and Three Dimensions

Helmut Alt | Visiting Professor, Tulane University

Abstract: Space efficient packing of geometric objects in two or three dimensions is a very natural problem which has interested mathematicians for centuries. Unfortunately, the computational complexity of finding space optimal packings seems to be very high. Even simple variants are NP-hard so that efficient approximation algorithms are called for. In the lecture, it will be shown how to approximate optimal packing for convex polygons in two and convex polyhedra in three dimensions if rigid motions are allowed for moving the objects. If only translations are allowed, we still can approximate the optimal packing of convex polygons. In three dimensions however, this problem seems to be much harder and we could only find algorithms for very special kinds of objects. This is joint work with Nadja Scharf.

About the Speaker: Helmut Alt studied mathematics beginning in 1968 at Universitaet des Saarlandes, Germany, and received his PhD, focused on complexity theory, in 1976. He was a research associate at Universitaet des Saarlandes and an assistant professor at Pennsylvania State University. Since 1986 he has been a Professor of Computer Science at Freie Universitaet Berlin, Germany. The focus of his work is on algorithms and complexity, in particular computational geometry.

Jan 23

Using Interactive Learning Activities to Address Challenges of Peer Feedback Systems

Amy Cook | Carnegie Mellon University

This event will be held on Wednesday, 1/23/2019, from 4:00 - 5:00 p.m. in Stanley Thomas, Room 302. Please note the special weekday for this event.

Abstract: Project-based learning helps prepare students for jobs by providing not only the technical experience of completing a project, but also the soft skills such as teamwork and communication that employers desire. Peer feedback, where students critique each other’s work, is an essential aspect of project-based learning. However, students often struggle to engage in peer feedback, to improve the quality of feedback they provide, and to reflect on the feedback they receive. My research explores how digital systems and interactive learning activities can improve the peer feedback process.

About the Speaker: Amy Cook is a PhD candidate in the Human-Computer Interaction Institute at Carnegie Mellon University. Her research lies at the intersection of human-computer interaction and STEM education. Her work involves designing both digital systems and learning activities to facilitate effective classroom interaction.

Jan 28

What Happens If You Postpone Your Decision?

Christer Karlsson | South Dakota School of Mines & Technology

Abstract: Many of our decision-making algorithms are constructed using a greedy approach: they conduct an evaluation and make an early decision, taking whichever path seems most profitable at that moment. What happens if we first walk down both the path less traveled and the most promising one, and at the next intersection evaluate the progress and see what looks most promising? I will describe work on arranging heterogeneous processes within a message passing interface, as well as work using a delayed greedy approach on decision tree classifiers. I will try to answer questions like: What benefits are there? What are the costs? Is it worth the effort?

About the Speaker: Dr. Christer Karlsson has had a long and winding road. He started his career as an officer in the Swedish army, where, among other things, he spent almost nine months with the peacekeeping forces in the Balkans, followed by almost three years as a teacher at the War Academy, where he was responsible for teaching computer science, electronics, and ballistics. He came to the United States in 1999 and became a citizen in 2006. Dr. Karlsson earned his Ph.D. in computer science at the Colorado School of Mines in 2012, working mostly on optimization problems for message passing on systems with heterogeneous nodes. Since January 2013 he has worked as an Assistant Professor at the South Dakota School of Mines and Technology.

Feb 11

Deep Adversarial Learning

Jihun Hamm | Ohio State University

Abstract: Adversarial machine learning in a broad sense is the study of machine learning theory and algorithms in environments with multiple agents that have different goals. This broad definition includes narrow-sense adversarial machine learning, which studies the vulnerability of learning algorithms in the presence of adversarial data perturbation during the training or testing phases. However, there are many learning problems that involve multiple agents and objectives in different contexts. For example, generative adversarial nets, domain adaptation, data sanitization, attacking and defending deep neural nets, and hyperparameter optimization can all be formulated as adversarial learning problems. In this talk, I will introduce several applications of adversarial learning focusing on privacy preservation, then discuss the challenges of adversarial optimization and propose new solutions. I will also present ongoing and future research in this direction.

About the Speaker: Dr. Hamm is a Research Scientist at the Department of CSE, the Ohio State University. He received his Ph.D. from the University of Pennsylvania in 2008 with a focus on nonlinear dimensionality reduction and kernel methods, and was a post-doctoral researcher at the Penn medical school working on machine learning applied to medical data analysis. Dr. Hamm's recent research is focused on machine learning theory and algorithms in adversarial settings and in the field of security and privacy. He has received the best paper award from MedIA-MICCAI (2010), was a finalist for MICCAI Young Scientist Publication Impact Award (2013), and is a recipient of the Google Faculty Research Award (2015). He has served as a reviewer for JMLR, IEEE TPAMI/TNN/TIP/TIFS, NN, PR, IJPR, and others, and also as a program committee member for NIPS, ICML, AAAI, and AISTATS.

Feb 20

Large-Scale Analysis of Online Social Networks for Social Understanding, Crisis Informatics, and Information Security

Cody Buntain | New York University

This event will be held on Wednesday, 2/20/2019, from 4:00 - 5:00 p.m. in Stanley Thomas, Room 302. Please note the special weekday for this event.

Abstract: The volume of data now available in online social networks has accelerated research into social good and advances in understanding both the real and virtual world, but this volume also makes finding useful information difficult and facilitates the spread of malicious content. In this talk, I discuss these intersectional issues, wherein online social networks simultaneously enable and corrupt the flow of information in society, and I present my research into potential solutions.

First, I describe two computational social science efforts, demonstrating how these online spaces enable understanding but also lead to polarization and conflict. In this description, I show how politicized topics evolve and how malicious actors have tried to leverage online platforms to increase polarization. Second, I present research on information retrieval and real-time summarization for making the information contained in these online systems more accessible to those individuals who most need it. This work includes real-time stream processing of social networking data to extract timely and concise information and a large-scale machine learning effort for identifying information needs and priorities for crisis responders. Third, to combat exploitation in the online ecosystem, I discuss a related machine learning effort to automate credibility assessment in online discussions.

I conclude with an overview of my research agenda for advancing my work into the larger, multi-platform information ecosystem. My research in this context includes applications of multi-view learning and social contagion for tracking topical and community evolution across platforms and time; unifying these cross-platform interactions into consistent models of information security, reputation, and resiliency; and integrating multi-modal content across these platforms to enhance crisis informatics.

About the Speaker: Cody Buntain received his PhD from the Computer Science Department at the University of Maryland and is a postdoctoral researcher with New York University's Social Media and Political Participation Lab. His primary research applies large-scale computational methods to social media and other online content, specifically studying how individuals engage socially and politically and respond to crises and disasters in online spaces. Current problems he is studying include cross-platform information flows, network structures, temporal evolution/politicization of topics, misinformation, polarization, and information quality. Recent publications include papers on influencing credibility assessment in social media, consistencies in social media's response to crises, the disability community's use of social networks for political participation, and characterizing gender and direction in online harassment.

Feb 22

3D Flood Extent Detection From UAV Imagery in Support of Flood Management

Leila Hashemi-Beni | North Carolina A&T State University

This event will be held on Friday, 2/22/2019, from 4:00 - 5:00 p.m. in Stanley Thomas, Room 302. Please note the special weekday for this event.

Abstract: Unmanned aerial vehicles (UAVs) offer a great alternative to conventional platforms for acquiring high-resolution remote sensing data at lower cost and with increased operational flexibility for flood modeling and management. UAV data analytics is a key step in the development of UAV remote sensing to correctly predict the extent of a flood, support emergency-response planning, and provide damage assessment in both spatial and temporal measurements. In spite of recent developments in UAV data collection technologies that have made the platforms “lighter, smaller and simpler,” more sophisticated processing is required to compensate for the necessarily limited performance of these platforms and sensors. This talk will focus on processing UAV imagery to reconstruct inundation areas in 3D using Structure from Motion as well as deep learning methods.

About the Speaker: Leila Hashemi-Beni is an assistant professor of Geomatics in the Department of Built Environment at the College of Science and Technology, North Carolina A&T State University. She holds a BSc and an MSc in Civil-Surveying Engineering (Geomatics) and a PhD in Geomatics Science. Her research experience and interests span 3D data modeling, UAV and satellite remote sensing and data analytics, automatic matching and change detection between various datasets, and the development of GIS and remote sensing methodologies for different applications. She is currently working as a PI/Co-PI on five externally funded projects totaling $1.7M, including three from the National Science Foundation and two from the energy industry. She has participated in the 3D Nation Elevation Requirements and Benefits Study by NOAA and USGS, North Carolina Chapter.

Feb 25

Learning to Cache: Accuracy or Speed?

Jian Li | University of Massachusetts, Amherst

Abstract: Caching is fundamental to, and has been used in, many applications, ranging from chip-level computing systems to content distribution. It has become particularly important with the rise of video streaming, the dominant application on today's Internet. The content accessed by a user is usually delivered by a large-scale networked system called a content distribution network (CDN). CDNs usually use caching as a means to reduce access latency as well as the bandwidth requirement at a central content repository.

In the talk, I will discuss an online learning approach to studying the fundamental performance limits of caching algorithms. Typical analysis of caching algorithms using the metric of steady-state hit probability under a stationary request process does not account for performance loss under a variable request arrival process. I instead conceptualize caching algorithms as online learning algorithms, and use this vantage point to study their adaptability from two perspectives: (a) the accuracy of learning a fixed popularity distribution; and (b) the speed of learning items' popularity. I propose a learning error metric capturing both how quickly and how accurately an algorithm learns the optimum, and use this to determine the tradeoff between these two objectives for many popular caching algorithms. Informed by the analytical results, I propose a novel hybrid algorithm, Adaptive-LRU, that learns changes in popularity both faster and better. I show numerically that it also outperforms all other candidate algorithms when confronted with either a dynamically changing synthetic request process or real-world traces. I will conclude with thoughts on using learning-based approaches to build better models, algorithms, and systems to support big data applications.
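To make the hit-probability metric concrete, here is a toy LRU cache evaluated on a request trace whose popularity shifts midway, the kind of non-stationary regime the talk's analysis targets. This is only a baseline sketch with invented names, not the Adaptive-LRU algorithm from the talk:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evict the least recently requested item."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def request(self, item):
        """Return True on a cache hit, False on a miss."""
        hit = item in self.store
        if hit:
            self.store.move_to_end(item)  # mark as most recently used
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict the LRU item
            self.store[item] = True
        return hit

# Empirical hit probability under a trace whose popular items change
# halfway through -- steady-state analysis alone would miss the cost of
# relearning the new popularity.
trace = ["a", "b", "a", "a", "c", "a"] * 50 + ["x", "y", "x", "x", "z", "x"] * 50
cache = LRUCache(capacity=2)
hits = sum(cache.request(r) for r in trace)
print(f"hit rate: {hits / len(trace):.2f}")
```

Comparing such empirical hit rates before and after the shift is one simple way to see the accuracy-versus-speed tradeoff the talk formalizes.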

About the Speaker: Jian Li is a postdoctoral research associate at the University of Massachusetts Amherst. He received his Ph.D. in Computer Engineering from Texas A&M University in December 2016, and his B.E. in Electrical Engineering from Shanghai Jiao Tong University in June 2012. His current research interests lie broadly in the interplay of large-scale networked systems and big data analytics, focusing on advancing machine learning, data analysis, game theory, and signal processing technologies in big data applications.

Mar 11

Learning from Spatial-Temporal-Networked Data: Dynamics Modeling, Representation Learning, and Applications

Yanjie Fu | Missouri University of Science and Technology

Abstract: The pervasiveness of mobile, IoT, and sensing technologies has connected humans, physical worlds, and cyber worlds into a grand human-social-technological system. This system consists of users and systems that interact with each other in real time and at different locations. As a result, big spatial-temporal-networked behavioral data have accumulated from mobile devices and app services. In this talk, I will first introduce what spatial-temporal-networked data are and why they are difficult to make sense of.

Then, I will focus on the dynamics, patterns, and applications of spatial-temporal-networked data, including (1) modeling the dynamics and annotating the semantics of spatial-temporal-networked behaviors; (2) learning deep representations of spatial-temporal-networked behaviors; and (3) applications to smart transportation systems and adaptive human-technology interaction. Finally, I will conclude the talk and present the big picture of developing closed-loop, intelligent, and trustworthy data science systems.

About the Speaker: Dr. Yanjie Fu received his Ph.D. degree from Rutgers University in 2016, the B.E. degree in Computer Science from the University of Science and Technology of China in 2008, and the M.E. degree in Computer Engineering from the Chinese Academy of Sciences in 2011. He is currently an Assistant Professor at Missouri University of Science and Technology (formerly the University of Missouri-Rolla).

His general interests include data mining and big data analytics. His recent research focuses on collective, dynamic, and structured machine learning; spatial-temporal-networked data mining; and automated data science systems, with applications to big data problems including intelligent transportation, user and system behavior analysis, power grids, recommender systems, and disaster and emergency management. He has research experience in industry research labs such as the IBM Thomas J. Watson Research Center and Microsoft Research Asia. He has published prolifically in refereed journals and conference proceedings, such as IEEE TKDE, ACM TKDD, IEEE TMC, ACM TIST, and the SIGKDD, AAAI, and IJCAI conferences.

Mar 18

The Human Factor in Computing

David Luginbuhl | Air Force Research Laboratory, Maxwell AFB, AL

Abstract: It is easy for us as computer scientists to be so focused on what machines can do that we lose sight of human involvement in the process of computing. To be sure, there has been plenty written about and studied on human-centered computing/user-centered design, and human-machine teaming continues to grow as an interest area as we anticipate autonomous or semi-autonomous systems finding their way into society at large. But these are primarily concerned with the human as user. Humans are really a much larger part of the computing landscape.

In this talk, I will examine several roles that humans play as a part of the computing process. I’ll discuss why those roles are important to consider, and I’ll survey research that addresses each of these roles. I’ll also look at implications for us as computer scientists in comprehending these different roles.

My intent is to demonstrate the need for computer scientists to understand and appreciate the intersection of our field with the human sciences (e.g., psychology, physiology, cognitive science, sociology). Only by taking a comprehensive, multidisciplinary approach can we hope to design machines that will interact effectively with us on an individual basis and that will be beneficial to society.

About the Speaker: Dr. David Luginbuhl’s thirty-five years of professional experience have included posts in higher education, as well as management and leadership in government research and development organizations, both as an Air Force officer and a civilian. He has taught at the Air Force Institute of Technology, Western Carolina University, and the Air War College. He has worked for the Air Force Research Laboratory in a number of posts, including program manager at the Air Force Office of Scientific Research, Assistant Chief Scientist at the 711th Human Performance Wing, and AFRL Chair at Air University. Dr. Luginbuhl received his Doctor of Philosophy degree in computer science from the University of Illinois at Urbana-Champaign and his master’s and bachelor’s degrees in math and computer science from Florida State University.

Mar 29

Code Obfuscation: Why Is This Still a Thing?

Christian Collberg | University of Arizona

This event will be held on Friday, 3/29/2019, from 4:00 - 5:00 p.m. in Stanley Thomas, Room 302. Please note the special weekday for this event.

Abstract: Early developments in code obfuscation were chiefly motivated by the needs of Digital Rights Management (DRM). Other suggested applications included intellectual property protection of software and code diversification to combat the monoculture problem of operating systems.

Code obfuscation is typically employed in security scenarios where an adversary is in complete control over a device and the software it contains and can tamper with it at will. We call such situations the Man-At-The-End (MATE) scenario. MATE scenarios are the best of all worlds for attackers and, consequently, the worst of all worlds for defenders: not only do attackers have physical access to a device and can reverse engineer and tamper with it at their leisure, they often have unbounded resources (time, computational power, etc.) to do so. Defenders, on the other hand, are often severely constrained in the types of protective techniques available to them and the amount of overhead they can tolerate. In other words, there is an asymmetry between the constraints of attackers and defenders. Moreover, DRM is becoming less prevalent (songs for sale on the Apple iTunes Store are no longer protected by DRM, for example); there are new cryptographically-based obfuscation techniques that promise provably secure obfuscation; secure enclaves are making it into commodity hardware, providing a safe haven for security-sensitive code; and recent advances in program analysis and generic de-obfuscation provide algorithms that render current code obfuscation techniques impotent.

Thus, one may reasonably ask the question: "Is Code Obfuscation Still a Thing?"

The answer is yes: code obfuscation is seeing a resurgence as a protective technology. One of the reasons is that, more and more, we are faced with applications where security-sensitive code needs to run on unsecured endpoints. In this talk we will show MATE attacks that appear in many novel and unlikely scenarios, including smart cars, smart meters, mobile applications such as Snapchat and smartphone games, Internet of Things applications, and ad blockers in web browsers. We will furthermore show novel code obfuscation techniques that increase the workload of attackers and which, at least for a time, purport to restore the symmetry between attackers and defenders.
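As a deliberately weak illustration of the literal-hiding idea behind data obfuscation (all names here are hypothetical, and no real protection scheme is reproduced), a string constant can be XOR-encoded so the cleartext never appears in the program source, while a MATE attacker with a debugger can still recover it at run time:

```python
def encode(s, key=0x5A):
    """XOR-encode a string's bytes so the cleartext literal does not
    appear in the program source or in a naive strings(1) scan."""
    return bytes(b ^ key for b in s.encode())

def decode(blob, key=0x5A):
    """Invert encode(): XOR with the same key restores the cleartext."""
    return bytes(b ^ key for b in blob).decode()

# The sensitive literal is stored only in encoded form...
SECRET = encode("license-check-v2")
# ...and reconstructed at run time, where an attacker with physical
# access can of course still observe it -- the asymmetry described above.
assert decode(SECRET) == "license-check-v2"
print(SECRET)
```

The ease of defeating this toy transform is precisely the point: raising attacker workload, rather than achieving secrecy, is what practical obfuscation aims for.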

About the Speaker: Christian Collberg is a Professor in the Department of Computer Science at the University of Arizona. Prior to arriving in Tucson he worked at the University of Auckland, New Zealand, and before that received his Ph.D. from Lund University, Sweden. He has also held a visiting position at the Chinese Academy of Sciences in Beijing, China, and taught courses at universities in Russia and Belarus.

Dr. Collberg's main research interest is the so-called Man-At-The-End (MATE) attack, which occurs in settings where an adversary has physical access to a device and compromises it by tampering with its hardware or software. He is the co-author of Surreptitious Software: Obfuscation, Watermarking, and Tamperproofing for Software Protection, published in Addison-Wesley's computer security series; the book has also been translated into Portuguese and Chinese.

In addition to his security research, Dr. Collberg is an advocate for reproducibility, repeatability, and sharing in Computer Science. He maintains a site that aims to be the most authoritative and complete catalog of research artifacts (e.g., code and data) related to Computer Science publications.

Apr 1

Protein Structure Analysis and Comparison: Identifying Regions of Similarity Using a Graph Analysis of Underlying Structural Information

Aaron Maus | University of New Orleans

Abstract: The study and analysis of proteins and their structures is arguably one of the most important endeavors in modern biology, with major applications both for the basic understanding of biology and for the development and design of the next generation of medicines. In the pursuit of understanding proteins, it is essential to be able to compare and analyze their structures. Whether for the analysis of the protein structure prediction effort, to study conformational changes of the same protein, or to study similar conformations of evolutionarily related proteins, the comparison and analysis of the complex three-dimensional shapes of protein structures is a difficult yet fundamental task. All existing protein structure comparison methods return a similarity score, but few give an underlying look at the parts of the structures that match. By converting the underlying geometric information of two structures into a graph, a maximum clique analysis can be used to identify the largest non-overlapping regions of similarity between the structures. These regions can easily be visualized, and they lend themselves to a deep analysis of the underlying similarities between structures, complementing existing methods of comparison by providing additional information that is not readily available. Applications of this technique will be presented, and it will also be shown that even though the method relies on solutions to an NP-complete problem, the resulting instances are feasible in this context.
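The graph formulation can be sketched as follows: vertices pair up residues of the two structures, edges mark geometrically compatible pairings, and a maximum clique is then a largest mutually consistent region of similarity. This toy uses brute force and invented labels, not the speaker's actual method or data:

```python
from itertools import combinations

def max_clique(vertices, edges):
    """Brute-force maximum clique; fine for the tiny illustrative graph
    below, though the general problem is NP-complete."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Try candidate sets from largest to smallest; first clique found wins.
    for size in range(len(vertices), 0, -1):
        for cand in combinations(vertices, size):
            if all(v in adj[u] for u, v in combinations(cand, 2)):
                return set(cand)
    return set()

# "A3~B7" pairs residue 3 of structure A with residue 7 of structure B;
# an edge says two pairings can coexist in one rigid superposition.
pairs = ["A1~B1", "A2~B2", "A3~B3", "A4~B9"]
compatible = [("A1~B1", "A2~B2"), ("A2~B2", "A3~B3"), ("A1~B1", "A3~B3")]
print(max_clique(pairs, compatible))  # the three mutually compatible pairings
```

Real instances need an exact solver with pruning rather than enumeration, which is where the feasibility claim in the abstract comes in.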

About the Speaker: Aaron Maus is a doctoral candidate working with Dr. Christopher Summa in the Engineering and Applied Science program at the University of New Orleans. His research interests include protein structure prediction, the design of energy functions for structural refinement, and the comparison and analysis of protein structures.

May 7

Discourse Models for Multimodal Communication

Malihe Alikhani | Rutgers University

This event will be held on Tuesday, 5/7/2019, from 10:30 a.m. - 11:30 a.m. in Stanley Thomas, Room 316. Please note the special weekday and time for this event.

Abstract: The integration of textual and visual information is fundamental to the way people communicate. My hypothesis is that despite the differences between visual and linguistic communication, the two have similar intentional, inferential, and contextual properties, which can be modeled with similar representations and algorithms. I present three successful case studies in which natural language techniques provide a useful foundation for supporting user engagement with visual communication. Finally, I propose using these findings to design interactive systems that can communicate with people using a broad range of appropriate modalities.

About the Speaker: Malihe Alikhani is a 4th year Ph.D. student in the department of computer science at Rutgers University, advised by Prof. Matthew Stone. She is pursuing a Certificate in Cognitive Science through the Rutgers Center for Cognitive Science and holds a BA and MA in mathematics. Her research aims at teaching machines to understand and generate multimodal communication. She is the recipient of the fellowship award for excellence in computation and data sciences from Rutgers Discovery Informatics Institute in 2018.