Check back soon for more information on the computer science seminar series. Unless otherwise noted, the seminars meet on Mondays at 4pm in Stanley Thomas 302. If you would like to receive notices about upcoming seminars, you can subscribe to the announcement listserv.
Fabio Miranda | New York University
Abstract: Over the past decade, technological innovations have produced a wealth of large and complex data sets on almost every aspect of human life, from natural science to business and social science. The analysis of this data is usually an exploratory process in which domain expertise plays an important role. It is, therefore, essential to integrate the user into the analysis loop, enabling them to formulate hypotheses and gain actionable insights into domain-specific problems. Interactive visualization is central in the support of this process, but the scale and complexity of the data present several challenges. My research focuses on proposing new methods and systems that allow for the interactive visual analysis of large data of different types, such as time-series, spatio-temporal, geometry, and image data. By combining visualization, machine learning, data management, and computer graphics, my work tackles fundamental challenges in data science, enabling effective analysis of large data to untangle real-world problems. In this talk, I will present my most recent contributions in the interactive visual analysis and exploration of large urban data, motivated by problems such as urban noise, neighborhood characterization, accessibility, and shadow impact on public spaces. The techniques and tools have been used by different domain experts, including urban planners, architects, occupational therapists, and acoustics researchers, allowing them to engage in data-driven science to better understand cities.
About the Speaker: Fabio Miranda is a postdoctoral research associate at the Visualization and Data Analytics Center (VIDA) and the Center for Urban Science and Progress (CUSP) at New York University. He received his PhD from New York University in 2018, advised by Professor Claudio T. Silva. During his PhD studies, he completed internships at Argonne National Laboratory, IBM Research, AT&T Labs Research, and Sandia National Labs. His research proposes new techniques that allow for the interactive visual analysis of large-scale data. He has worked closely with domain experts from different fields, from urban planning to occupational therapy, and the outcome of these collaborations includes not only research published in leading visualization, database, HCI, and AI venues, but also systems that were made available to experts in academia, industry, and government agencies. His work has also received extensive coverage from different media outlets, including The New York Times, The Economist, Architectural Digest, and Curbed.
Dieter Pfoser | George Mason University
This talk will be held on Wednesday, February 12th, at 4:00 p.m. in Stanley Thomas 302. Please note the special weekday for this event.
Abstract: Trajectory datasets have sparked a great deal of research, ranging from physical database work and graph algorithms to their use as a valuable resource in urban science. We will initially discuss fundamental algorithmic efforts related to spatiotemporal database queries and to using trajectories to calculate travel times for road networks and their effect on shortest-path calculations. While interesting as a data type, trajectories are also an important resource for understanding human mobility and what goes on in a city. Here, we will look at how people’s itineraries can be used to compile a trajectory-driven travel guide and how to identify underlying transportation networks and respective travel choices. The talk concludes with current challenges and research directions.
About the Speaker: Dr. Dieter Pfoser is a professor and chair of the Dept. of Geography and GeoInformation Science at George Mason University. He received his PhD in computer science from Aalborg University, Denmark in 2000. At GMU he teaches courses related to geospatial data management, Linked Data, Web application development using open-source software, and data visualization. His research interests include data management, data mining for spatial and spatiotemporal data, graph algorithms for dynamic networks, and user-generated content, e.g., map-matching and map construction algorithms. Over the years, Dr. Pfoser’s research has been supported by grants from NSF, DARPA, NGA, DHS, and the European Commission. More information on his work can be found at http://www.dieter.pfoser.org.
Hao Wang | University of Toronto
This talk will be held on Thursday, February 13th, at 2:00 p.m. in Norman Mayer, Room 200A. Please note the special weekday, time, and venue for this event.
Abstract: In the era of the Internet of Things, mobile computing, and Big Data, millions of sensors and mobile devices are constantly generating massive volumes of data. To utilize this vast amount of data without violating data privacy, federated learning has emerged as a new paradigm of distributed machine learning that orchestrates model training across mobile devices. In this talk, I will first introduce the current challenges in distributed machine learning, and then present my recent work on statistical heterogeneity in federated learning and on distributed machine learning on a serverless architecture. Specifically, I will talk about applying reinforcement learning to optimize distributed machine learning by learning the best choices for task scheduling and resource provisioning.
About the Speaker: Hao Wang is a 5th-year Ph.D. candidate at the University of Toronto under the supervision of Professor Baochun Li. Hao received his B.E. in Information Security and his M.E. in Software Engineering from Shanghai Jiao Tong University, in 2012 and 2015, respectively. His research interests include large-scale data analytics, distributed machine learning, and datacenter networking. He has published 16 papers (including eight first-author papers) in prestigious networking and systems conferences and journals, such as INFOCOM, SoCC, TPDS, and ToN. For more information about Hao, please visit https://www.haow.ca/.
Aron Culotta | Illinois Institute of Technology
Abstract: Online social networks present exciting opportunities for conducting multidisciplinary research in human sciences such as public health, sociology, political science, and marketing. However, properly analyzing this noisy, non-representative data requires advances in natural language processing, network analysis, machine learning, and causal inference. In this talk, I will summarize our recent progress in three fundamental areas that support this growing field: (1) controlling for confounds in text classification; (2) using distant supervision to train machine learning models; (3) using causal inference to understand how language affects perception. Applications discussed will include smoking cessation, deceptive marketing, political sentiment, and disaster management.
About the Speaker: Aron Culotta is an Associate Professor of Computer Science at the Illinois Institute of Technology in Chicago, where he leads the Text Analysis in the Public Interest Lab and co-directs the bachelor's and master's programs in artificial intelligence. He obtained his Ph.D. in Computer Science from the University of Massachusetts, Amherst in 2008, where he developed machine learning algorithms for natural language processing. He was a Microsoft Live Labs Fellow, completed research internships at IBM, Google, and Microsoft Research, received paper awards at AAAI and CSCW, and is a Program co-Chair for ICWSM-2020. His research is supported by several NSF-funded collaborations with researchers in public health, political science, marketing, and emergency management.
Rupert Freeman | Microsoft Research
Abstract: Even in the age of big data and machine learning, human knowledge and preferences still play a large part in decision making. For some tasks, such as predicting complex events like recessions or global conflicts, human input remains a crucial component, either in a standalone capacity or as a complement to algorithms and statistical models. In other cases, a decision maker is tasked with utilizing human preferences to, for example, make a popular decision over an unpopular one. However, while often useful, eliciting data from humans poses significant challenges. First, humans are strategic, and may misrepresent their private information if doing so can benefit them. Second, when decisions affect humans, we often want outcomes to be fair, not systematically favoring one individual or group over another.
In this talk, I discuss two settings that exemplify these considerations. First, I consider the participatory budgeting problem in which a shared budget must be divided among competing public projects. Building on classic literature in economics, I present a class of truthful mechanisms and exhibit a tradeoff between fairness and economic efficiency within this class. Second, I examine the classic online learning problem of learning with expert advice in a setting where experts are strategic and act to maximize their influence on the learner. I present algorithms that incentivize truthful reporting from experts while achieving optimal regret bounds.
About the Speaker: Rupert Freeman is a postdoc at Microsoft Research New York City. Previously, he received his Ph.D. from Duke University under the supervision of Vincent Conitzer. His research focuses on the intersection of artificial intelligence and economics, particularly in topics such as resource allocation, voting, and information elicitation. He is the recipient of a Facebook Ph.D. Fellowship and a Duke Computer Science outstanding dissertation award.
Wei Zhang | Microsoft Azure Cloud
This talk will be held on Wednesday, March 4th, at 4:00 p.m. in Stanley Thomas 302. Please note the special weekday for this event.
Abstract: Network function virtualization (NFV) allows traditional hardware middleboxes such as routers and firewalls to run as software on commodity servers, enabling complex, flexible, and customized network services. Unfortunately, software is typically slower than hardware. In addition, different network flows may need to be steered through different sequences of network functions (called service chains), each with customized needs. Furthermore, as network functions become more complex, we need to bring together advances in operating systems and networking to build efficient and scalable network functions.
In this talk, I will describe openNetVM, an efficient and flexible data plane platform that processes packets at line rate with zero extra packet copies. Next, I will present Flurries, a data plane management controller that provides flow-level performance management and customized service chains. Then, I will describe a software-based load balancer as an example of how to build efficient and scalable network functions on top of an NFV platform. Finally, I will conclude the talk with a brief discussion of my future research plans.
About the Speaker: Wei Zhang is currently an R&D engineer at Microsoft. In August 2018, she completed her Ph.D. in the Department of Computer Science at the George Washington University. Her projects cover various aspects of computer networks and systems, aiming to build efficient, scalable, and flexible network platforms for software-based network functions and to develop efficient resource management and scheduling solutions for cloud systems. Her research interests include cloud computing, operating systems, distributed systems, network systems, software and hardware co-design, and resource disaggregation. Her research has appeared in a number of premier conferences and workshops in networking and cloud systems, including CoNEXT, SIGCOMM, Middleware, SOSR, and HotCloud. She received a Best Paper Award at ICAC 2016, a Cisco scholarship in 2016, second prize in the GENI networking competition in 2015, and Beihang's outstanding graduate student and national scholarships in 2014. She has held research internships at HP Labs, AT&T Labs Research, Futurewei, VMware, Baidu, and Oracle in systems, networking, and cloud groups.
Zhengming (Allan) Ding | Indiana University-Purdue University Indianapolis
Abstract: Multi-view data are extensively accessible nowadays thanks to various types of features, viewpoints, and sensors. For example, Kinect, the most popular commercial depth sensor, uses both visible-light and near-infrared sensors for depth estimation; autonomous driving uses both visual and radar/lidar sensors to produce real-time 3D information on the road; and face analysis algorithms prefer face images from different views for high-fidelity reconstruction and recognition. All of these facilitate better data representation in different application scenarios. This talk covers multi-view visual data representation approaches from two knowledge-flow perspectives, namely knowledge fusion and knowledge transfer, ranging from conventional multi-view learning to zero-shot learning, and from transfer learning to few-shot learning.
About the Speaker: Zhengming Ding received the B.Eng. degree in information security and the M.Eng. degree in computer software and theory from the University of Electronic Science and Technology of China (UESTC) in 2010 and 2013, respectively, and the Ph.D. degree from the Department of Electrical and Computer Engineering, Northeastern University, USA, in 2018. He has been a faculty member in the Department of Computer, Information and Technology at Indiana University-Purdue University Indianapolis since 2018. His research interests include transfer learning, multi-view learning, and deep learning. He received the National Institute of Justice Fellowship during 2016-2018, and was the recipient of a best paper award (SPIE 2016) and a best paper candidate award (ACM MM 2017). He is currently an Associate Editor of the Journal of Electronic Imaging (JEI) and IET Image Processing, and a member of IEEE, ACM, and AAAI.
Yue Duan | Cornell University
This presentation will be delivered online. You may access the presentation on Monday, March 16th, at 3:30 pm CST via the following link:
Meeting ID: 782 675 4964
Please be sure to mute your microphone when you log on.
Abstract: Programs are not immutable. In fact, most programs undergo constant changes for security reasons (e.g., vulnerability fixes) and non-security reasons (e.g., new features). These code changes expose significant security challenges. In this talk, I will present my approach, which combines static/dynamic program analysis with other techniques, including deep learning and virtual machine introspection (VMI), to understand code changes from a security perspective in the mobile and PC software domains and to solve real-world security issues. First, Android packers, a set of code transformation techniques, have rendered existing malware detection techniques obsolete. I will talk about DroidUnpack, a VMI-based Android packing analysis framework used to perform the first large-scale systematic study of Android packing techniques, and report some surprising findings. Second, Android third-party libraries (TPLs) have become one of the major sources of Android security issues. We propose LibBandAid to automatically generate updates for TPLs in Android apps in a non-intrusive fashion. Third, I will present a novel technique named DeepBinDiff, an unsupervised, deep neural network-based, program-wide code representation learning technique for binary diffing.
About the Speaker: Yue Duan is currently a Postdoctoral Researcher at Cornell University. He received his Ph.D. in Computer Science from UC Riverside, and earned his M.S. and B.S. from Syracuse University and Xi'an Jiaotong University, respectively. During his Ph.D. studies, he interned at NEC Labs and Fujitsu Laboratories of America. Before that, he served as a system software engineer on the Nvidia GPU driver team for 1.5 years. His research interests lie mainly in system security, mobile security, deep learning, and blockchain. His work has been extensively published in leading security conferences, including ACM CCS, NDSS, and RAID.
Sarah Preum | University of Virginia
This presentation will be delivered online. You may access the presentation on Wednesday, March 18th, at 3:30 pm CST via the following link:
Meeting ID: 536 848 068
Please be sure to mute your microphone when you log on.
Abstract: We are increasingly interacting with pervasive applications from a wide variety of domains, including smart health, intelligent assistants, and smart cities. However, tremendous challenges remain in understanding the effects of such interactions and ensuring the safety of pervasive systems. In this talk, I will demonstrate how deep semantic inference on the data generated by interactive, pervasive systems can improve personal and public health safety. For personal health safety, I tackle the challenge of detecting conflicting information and interventions originating from various information sources and health applications, respectively. I will present novel textual inference, time-series prediction, and information fusion techniques to (i) detect and predict conflicts and (ii) deliver safe, interpretable, and personalized health interventions. For public health safety, I address the challenge of providing real-time decision support to emergency responders. I will demonstrate a novel, weakly-supervised information extraction solution that enables the protocol-driven decision support pipeline of an intelligent assistant for emergency response. Throughout the talk, I will demonstrate how I address the challenges of limited training data, knowledge integration, and interpretability for safety-critical, pervasive applications with low tolerance for error and strict domain constraints. Finally, I will propose natural language processing, multi-modal data fusion, and knowledge engineering techniques to develop more capable intelligent assistants and enhance data-driven decision support systems.
About the Speaker: Sarah Preum is a Ph.D. candidate in the Department of Computer Science at the University of Virginia (UVA), where she works with Professor John Stankovic. Sarah's research interest lies broadly in the intersection of artificial intelligence and cyber-physical systems, with a focus on increasing the safety and effectiveness of human-centric systems through information extraction and fusion. Sarah has developed novel natural language processing, knowledge integration, and temporal modeling techniques to provide personalized decision support for health safety. She received her M.Sc. in Computer Science from the University of Virginia in 2015. Sarah graduated from Bangladesh University of Engineering and Technology (BUET) in 2013 summa cum laude with a B.Sc. degree in Computer Science and Engineering (CSE). Before joining UVA, she served as a lecturer in CSE at BUET. She also spent time at the Bosch Research and Technology Center as an AI research intern. Sarah is a recipient of the UVA Graduate Commonwealth Fellowship, the Adobe Research Graduate Scholarship, the NSF Smart and Connected Health Student Award, and the UVA Big Data Fellowship. Her work has been published in premier CS conference proceedings and journals, including ICDE, CIKM, PerCom, IPSN, ACM CSUR, and IEEE Trans. of CPS. More information about Sarah and her work can be found on her website: http://www.cs.virginia.edu/~sp9cx/
Jaelle Scheuerman | Tulane University
This presentation will be delivered online. You may access the presentation on Friday, April 24th, at 3:00 pm CST via the following link: https://tulane.zoom.us/j/480228077 . Please be sure to mute your microphone when you log on.
Abstract: Heuristics allow us to navigate a complex and uncertain world, react to new situations, and make decisions without needing all the information. While this is often helpful, using heuristics can sometimes lead to biased actions and potentially costly errors. Many algorithms are developed with the goal of improving human performance: either by offering recommendations or by seeking to predict and thereby mitigate human error. Such tools benefit when they can account for human bias and behavior, either through analysis of behavioral data in the design process or by incorporating computational cognitive models that simulate and predict behavior. This dissertation explores several novel computational approaches for simulating bias and heuristics in human behavior at different cognitive levels of abstraction. First, we develop a model of attentional bias in spatial auditory attention for the ACT-R cognitive architecture. We then move to a more complex task, modeling the confirmation bias that occurs when making decisions with noisy feedback. Finally, we consider behavior in a multi-agent setting, collecting and analyzing experimental data about heuristics in voting. In all three instances, we implement different computational models of human behavior and evaluate them on behavioral data. This work provides tools that support a better understanding of human behavior and enables the integration of bias and heuristics in human-aware computing, laying the groundwork for designing new computational approaches inspired by human heuristic behavior.
About the Speaker: Jaelle Scheuerman is a Ph.D. candidate in the Computer Science department at Tulane University. In her research, she uses a combination of computational methods and behavioral data to inform the design of human-aware tools. Prior to pursuing her Ph.D., Jaelle received an M.S. in Human-Computer Interaction from Iowa State University and a B.S. in Computer Science from South Dakota School of Mines.