Department of Computer Science Abstracts

3D Go

Aaron Yang, Kai Yu

Board games have long been a traditional way for people to get together to strategize or simply have fun. Using Unity and the resources it provides, we adapt the traditional East Asian game of Go to three dimensions. Using a 3x3x3 or 5x5x5 cube as the board, we extend Go across multiple stacked layers. Visualizing those layers is a challenge unique to 3D, so we incorporate elements such as camera controls to make navigation easier. Capturing stones also becomes structurally more complex, since groups can span many layers; players must apply critical thinking and devise creative methods to capture across them. Finally, the game gives newcomers an approachable way to try Go, which has historically been played mostly by an elderly East Asian audience, expanding its reach to new players and regions.
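
Capture logic illustrates the added complexity: the liberty rule is unchanged, but adjacency gains two directions (up and down through the layers), so a group must be flood-filled in six directions. A minimal sketch of liberty counting on an n x n x n board (the board representation and function name are our illustration, not the project's actual Unity code):

```python
from collections import deque

def count_liberties(board, start, n):
    """Flood-fill the group containing `start` and count its empty
    neighbors (liberties) on an n x n x n board.

    board: dict mapping (x, y, z) -> 'B', 'W', or None for empty.
    """
    color = board[start]
    group, liberties = {start}, set()
    queue = deque([start])
    # 6-connectivity: the 3D board adds up/down to the usual 4 directions.
    directions = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in directions:
            nxt = (x + dx, y + dy, z + dz)
            if not all(0 <= c < n for c in nxt):
                continue  # off the board
            if board.get(nxt) is None:
                liberties.add(nxt)          # empty point: a liberty
            elif board[nxt] == color and nxt not in group:
                group.add(nxt)              # same color: extend the group
                queue.append(nxt)
    return len(liberties)  # 0 means the group is captured
```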

Language Model Enhanced Video Game

Bobby Becker, Luke Gulson, Ben Russell

As generative AI language models have become capable of producing creative content, we face an important technical and ethical question: how do we use these innovations to enhance the creative process while keeping human creativity relevant? Our language-model-enhanced video game is our approach to this dilemma. In Unity, we design a demo for a third-person fantasy RPG where players can engage with NPCs and test their skills in turn-based combat encounters. We use OpenAI's API for GPT to generate NPC responses and flavor text, adding replayability by making text descriptions original in every playthrough. By giving all NPCs access to a lore file written by us, we ensure that all AI-generated dialogue is engaging, consistent, and synergistic with our overall narrative.
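
The lore-grounding pattern is to prepend the shared lore document to each NPC's system prompt. A minimal sketch of that call (the project invokes the API from Unity, whereas this sketch uses OpenAI's Python client for brevity; the file name, model choice, and prompt wording are our assumptions):

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical lore file; the project keeps a shared, author-written
# lore document that every NPC prompt is grounded in.
with open("lore.txt") as f:
    lore = f.read()

def npc_reply(npc_name: str, player_line: str) -> str:
    """Generate an in-character NPC response grounded in the shared lore."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=[
            {"role": "system",
             "content": f"You are {npc_name}, an NPC in a fantasy RPG. "
                        f"Stay consistent with this lore:\n{lore}"},
            {"role": "user", "content": player_line},
        ],
    )
    return response.choices[0].message.content

print(npc_reply("Brom the Blacksmith", "Have you heard about the dragon?"))
```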

Evaluating the Trustworthiness of Retrieval Augmented Generation Models for Civic Engagement and Transparency

Hayden Outlaw, Mikey Sison, Catherine Brooks

In collaboration with New Orleans community group Eye on Surveillance, we seek to evaluate the reliability and overall quality of Sawt, an AI tool that allows users to investigate New Orleans city documents and resources with a natural language interface. Sawt utilizes Retrieval Augmented Generation (RAG) to generate interpretable responses to query inputs, allowing local nonprofits, activist groups, and New Orleans residents to efficiently navigate local council records. Our efforts include an expanded front-end that collects user evaluations and an automated validation framework for language model development and deployment implemented in the back-end. With these augmentations, we investigate the trustworthiness of the model, the suitability of external automated evaluation metrics for retrieval-augmented products, and how those metrics compare to manual evaluation results. Our research and additions to Sawt improve the tool's development workflow and transparency, ensuring that it can best be shaped into a tool that serves local activist and civic priorities.
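
At its core, a RAG pipeline like Sawt's retrieves the council records most relevant to a query and conditions the language model's answer on them. A generic retrieve-then-ground sketch (the toy documents, TF-IDF retriever, and prompt are illustrative assumptions, not Sawt's actual implementation):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for the indexed city-council documents.
documents = [
    "Ordinance 33021: regulations on surveillance camera procurement.",
    "Council minutes, March session: budget hearing on public safety.",
    "Resolution on community oversight of facial recognition systems.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    """Assemble the grounded prompt passed to the language model."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What rules govern surveillance cameras?"))
```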

HomeRun App

Amara Midouhas, Alec Rovner, Rosey Sarnataro, Rafael Djamous

Our project focuses on creating a comprehensive mobile application that simplifies off-campus housing search and lease management for students and landlords. Our app integrates various rental aspects, such as property listings filtered by school, secure messaging, maintenance requests, and secure payments, into a user-friendly platform. The app aims to enhance efficiency, reduce paperwork, and improve communication between tenants and landlords.

Determining if Spiking Neural Networks Have a Greater Capacity to Resist Perturbations to Sensory Data in Image Classification Tasks

Jason Min, Caitlin Chen, Max Curl

The current approach to image classification is to use deep neural networks (DNNs), as they are accurate and robust to perturbations. However, these models do not accurately reflect the types of neurons seen in the human brain. Spiking Neural Networks (SNNs) are a class of AI architectures built from neuron models more like those in the brain, incorporating features such as leak channels. The basal ganglia is a circuit within the human brain composed of neurons with these leak channels, and it allows the brain to make decisions based on reward values. Nengo is a Python package for generating populations of neurons that can form an SNN-like model of the basal ganglia or other circuits. Using this more neuroanatomically faithful model, we aim to show that SNNs are better than DNNs at classifying noisy images. To test this, we created our own algorithm to attack the models and find their weaknesses through untargeted perturbations to our street-sign image dataset. By comparing the SNN's results with the DNN's, we hope to observe that the SNN has better accuracy, speed, and/or robustness.
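
Our attack is untargeted: it degrades accuracy by perturbing pixels without steering the model toward any particular wrong label. A simplified sketch of that evaluation loop under additive Gaussian noise (our actual algorithm searches perturbations more systematically; this shows only the accuracy-versus-noise measurement, with placeholder data and model names):

```python
import numpy as np

def perturb(images: np.ndarray, sigma: float, rng) -> np.ndarray:
    """Apply untargeted Gaussian noise and clip back to valid pixel range."""
    noisy = images + rng.normal(0.0, sigma, size=images.shape)
    return np.clip(noisy, 0.0, 1.0)

def accuracy_under_noise(model, images, labels, sigmas, seed=0):
    """Measure classification accuracy as perturbation strength grows."""
    rng = np.random.default_rng(seed)
    results = {}
    for sigma in sigmas:
        preds = model.predict(perturb(images, sigma, rng))
        results[sigma] = float(np.mean(np.argmax(preds, axis=1) == labels))
    return results

# Usage (assumes `model.predict` returns class scores per image):
# accuracy_under_noise(snn_model, x_test, y_test, sigmas=[0.0, 0.05, 0.1, 0.2])
```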

The Fuzz: Fuzzing Vulnerabilities with American Fuzzy Lop Fuzzer

Keith Mitchell, Victoria Chan, Izzy Chow

Universally, code is susceptible to attack, alteration, and exploitation. Fuzzing provides a means of testing code for vulnerabilities by passing mutated inputs to a program and watching for crashes or other exploitable behavior. It is, however, a relatively shallow methodology: blind, random mutations are highly unlikely to reach certain code paths, leaving some vulnerabilities firmly outside the technique's reach. Our group tests the limits of the American Fuzzy Lop (AFL) fuzzer by running it against both previously found and unknown vulnerabilities in open-source code and bug bounties. We aim to write an optimization plan, potentially discover new vulnerabilities, contribute new findings to security databases, and enhance overall understanding of AFL's vulnerability discovery methodology.
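
The core loop is simple even though AFL's coverage-guided engine is not: mutate an input at random, run the target, and record inputs that crash it. A stripped-down illustration of blind mutation fuzzing (AFL adds compile-time instrumentation and genetic scheduling on top of this; "./target" and "seed.bin" are placeholders):

```python
import random
import subprocess

def mutate(data: bytes, n_flips: int = 8) -> bytes:
    """Overwrite a few random bytes in the seed input."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(target: str, seed: bytes, iterations: int = 1000):
    """Feed mutated inputs to the target and collect crashing inputs."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        proc = subprocess.run([target], input=candidate,
                              capture_output=True, timeout=5)
        if proc.returncode < 0:  # killed by a signal, e.g. SIGSEGV
            crashes.append(candidate)
    return crashes

# crashes = fuzz("./target", seed=open("seed.bin", "rb").read())
```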

Tulane Micromouse

Jason Li and Rehan Mullan

In the annual IEEE Micromouse Competition, groups of IEEE student engineers compete to see whose robot can navigate a maze the fastest. Teams have no knowledge of the maze until it is revealed at the competition, and they may not program anything additional into the mouse after the reveal. Not everything is random, however: the mouse always starts in the bottom-left corner of the maze and finishes at its center. Our goal this year is to implement a Reinforcement Learning solution to the maze, aiming to improve on the algorithm most commonly used in competition.
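
A tabular Q-learning sketch of the idea, where the mouse learns state-action values over repeated simulated runs (the maze encoding, rewards, and hyperparameters are illustrative assumptions, not the competition code):

```python
import random
from collections import defaultdict

def q_learn(maze, start, goal, episodes=5000,
            alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning on a grid maze.

    maze: dict mapping each open cell (x, y) to its reachable neighbors.
    An action is represented by the neighboring cell moved into.
    """
    Q = defaultdict(float)
    for _ in range(episodes):
        state = start
        while state != goal:
            moves = maze[state]
            # epsilon-greedy: explore occasionally, otherwise exploit.
            if random.random() < epsilon:
                action = random.choice(moves)
            else:
                action = max(moves, key=lambda m: Q[(state, m)])
            # Step cost of -1 drives the policy toward short paths.
            reward = 100.0 if action == goal else -1.0
            best_next = max((Q[(action, m)] for m in maze[action]),
                            default=0.0)
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = action
    return Q
```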

MultiVision Imager

Thomas Cutro, Joseph Wagner

When classifying skin disease, the primary method, even in today's technological age, is the biopsy. The issue with biopsies is that they require a physician to analyze them, something many places and people around the world lack access to. Meanwhile, current classification algorithms for skin diseases struggle with accuracy, especially on darker skin tones. To combat these issues, we are designing an iOS application that combines three imaging channels: RGB, thermal, and depth. Performing three different measurements with a handheld device would increase access to effective imaging analysis. Due to its versatility, this application could also serve other fields, such as Civil Engineering and Building Science, where more than just an RGB image is necessary.

Virtual Reality Hurricane Preparedness and Response Training

Grace Livaudais, Rachel Roberts

Our project leverages the Unity Game Engine to create a Virtual Reality (VR) training experience for essential hurricane preparedness and response skills, specifically targeting a younger audience that is typically harder to engage and unfamiliar with the topic. Users navigate experiences such as assembling a disaster kit, fortifying their homes for a hurricane, and other common scenarios, offering a novel and immersive approach to disaster preparedness education. Informed by conversations with the Public Health Emergencies & Environmental Health Unit and resources from local organizations, the training is intended to serve as a tool for organizations to enhance disaster response capabilities, which is crucial in a high-risk city like New Orleans.

Hidden Activity Signal and Trajectory Anomaly Characterization (HAYSTAC)

Mackenzie Bookamer

IARPA is funding the HAYSTAC project and has established its goals. HAYSTAC aims to establish models of "normal" human movement across times, locations, and people in order to characterize what makes an activity detectable as anomalous within the expanding corpus of global human trajectory data. Success will establish the scientific foundation connecting data, movement, and the expectation of privacy. My specific goal is to establish statistical relationships between movement trajectories, primarily by analyzing visualizations of individual trajectories.
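
A minimal example of the kind of single-trajectory visualization this analysis starts from (the points below are a synthetic random walk, not HAYSTAC data):

```python
import matplotlib.pyplot as plt
import numpy as np

# Synthetic stand-in for one agent's (longitude, latitude) pings.
rng = np.random.default_rng(7)
steps = rng.normal(0, 0.001, size=(200, 2)).cumsum(axis=0)
lon, lat = -90.07 + steps[:, 0], 29.95 + steps[:, 1]  # near New Orleans

fig, ax = plt.subplots()
ax.plot(lon, lat, lw=0.8, alpha=0.7)
ax.scatter(lon[[0, -1]], lat[[0, -1]], c=["green", "red"],
           zorder=3, label="start / end")
ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
ax.set_title("Single-agent movement trajectory (synthetic)")
ax.legend()
plt.show()
```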

CogniSync: Machine Learning & EEG-based Directional Controls

Justin Haysbert, Gabriel Sagrera, Shayne Shelton, Ryan Stevens

There are millions of paralysis patients in the United States. Electroencephalogram (EEG) headsets offer a way for these patients to operate mechanical devices using motor-imagery-related brain waves. Machine learning approaches implemented with scikit-learn (LDA, LRC, DTC, and RFC) and PyTorch can accurately and precisely categorize a user's brain state into directional controls for this application. However, training time is long, training is user-specific, and EEG signals have a low signal-to-noise ratio. This project aims to reduce user training time by (1) decreasing the number of movements users perform during training and (2) building a comprehensive graphical user interface (GUI) to streamline the recording, modeling, and prediction processes.
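
A minimal scikit-learn sketch of the classification step, mapping feature vectors extracted from EEG windows to directional classes (the feature files, label encoding, and data shapes are placeholders, not the project's pipeline):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Placeholder data: one row of features per EEG window, labeled with
# the imagined direction.
X = np.load("eeg_features.npy")    # shape: (n_windows, n_features)
y = np.load("eeg_directions.npy")  # e.g. 0=left, 1=right, 2=forward, 3=back

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"LDA cross-validated accuracy: {scores.mean():.2%}")

# Once fit, each new window becomes a directional control command:
clf.fit(X, y)
# command = clf.predict(new_window_features.reshape(1, -1))
```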

BlightWatch NOLA

Eugene Lim and JT McDermott

Blighted properties are buildings and/or empty lots in states of disrepair, abandonment, and neglect. They create significant challenges for a city's well-being and safety, including negative economic effects such as decreased values of surrounding properties, health hazards from toxins and pests, and increased criminal activity such as drug use and vandalism. Our project, BlightWatch NOLA, aims to provide a public-facing dashboard that allows users to explore the location and status of blighted properties in the city. Using cloud-based data warehousing, we have unified several separate, publicly available blight-related datasets and made them accessible in an interactive web application that automatically updates itself based on changes to the outside data sources. This enhanced data visibility will help city government officials, local community organizations, businesses, and the general public understand the current state of the problem and assist in decision making.

User-Friendly Interface for Single-Cell RNA Sequencing Analysis

Jiayi Xu, Wendy Yang, Kyra Zhu

Single-cell RNA sequencing (scRNA-seq) is a powerful tool in biological research, used in disease investigation and drug discovery. However, challenges in data analysis limit its accessibility due to the cost and coding requirements of existing services. Our project addresses this by developing a user-friendly scRNA-seq analysis interface, programmed in R and R Shiny. This interface aims to simplify data analysis and reduce barriers, making the technique more accessible to researchers from different backgrounds. Users can easily process their datasets, customize analyses, and generate comprehensive reports that give insight into their data and facilitate their research.

Rexchange – Machine Learning and New Orleans Real Estate

Lauren Janko, James Menzer, and Luke Albright

Tulane seniors Lauren Janko, James Menzer, and Luke Albright have partnered with Rexchange, a New Orleans-focused real estate application, to introduce a new process for home valuations. Rexchange plans to improve the accuracy of home listing prices by providing its own estimate, the "Rextimate," which is influenced by users who submit their own opinions of a home's value. The team from Tulane is developing a methodology to predict future listing prices with machine learning models that analyze several data features and historical home prices. All of this will be used to curb the common problem of inaccurate home valuations suffered by both buyers and sellers in New Orleans.

Enhancing Trust in AI: Deepfake Detection and Interpretation through Advanced Machine Learning Models

Mauryan Uppalapati

In response to the escalating threat of deepfakes in digital media, which undermine information integrity and societal trust, this project investigates and develops deepfake detection models. Starting with an exploration of deepfake generation techniques, the project then shifts focus to the creation of two machine learning models, based on ResNetV2 and VGG16, each enhanced with additional layers for improved detection efficacy. These models were trained on a comprehensive dataset of 140,000 images, comprising both real and AI-generated content, to ensure accuracy and robustness. A significant component of this project is the incorporation of Gradient-weighted Class Activation Mapping (Grad-CAM), which produces interpretive heatmaps indicating the decisive features the models use for classification. This approach introduces a layer of transparency and interpretability, aiming to build trust between humans and AI by elucidating the AI's decision-making process.
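
Grad-CAM weights the last convolutional layer's activation maps by the gradient of the predicted class score, yielding a heatmap of the regions that drove the decision. A hedged PyTorch sketch against a stock VGG16 (the project's models add custom layers this sketch omits):

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1").eval()
target_layer = model.features[28]  # last convolutional layer of VGG16

activations, gradients = {}, {}
def fwd_hook(_, __, output):
    activations["value"] = output
def bwd_hook(_, grad_in, grad_out):
    gradients["value"] = grad_out[0]

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image: torch.Tensor) -> torch.Tensor:
    """Return a heatmap of the regions driving the predicted class.

    image: preprocessed tensor of shape (1, 3, H, W).
    """
    scores = model(image)
    scores[0, scores.argmax()].backward()
    # Weight each activation channel by its average gradient, then ReLU.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1))
    cam = cam / (cam.max() + 1e-8)  # normalize to [0, 1]
    return F.interpolate(cam.unsqueeze(0), size=image.shape[2:],
                         mode="bilinear", align_corners=False)[0]
```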

Predicting Stock Volatility with High-Frequency Market Data

This research details an approach to the former Kaggle competition (https://www.kaggle.com/competitions/optiver-realized-volatility-predict…) posted by Optiver (https://optiver.com/), a leading electronic market maker. Implied volatility, the projected magnitude of a stock price's fluctuations, is a crucial input to the Black-Scholes option pricing model. In this capstone, we explore the relationship between market interest, order execution, and short-term changes in volatility across millions of rows of high-frequency trading (HFT) data, with the ultimate goal of predicting where a stock's volatility will realize in the 10 minutes following a market order. We measure our effectiveness using Root Mean Square Percentage Error (RMSPE), in keeping with the competition's evaluation standards.
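
Two formulas anchor the task: realized volatility over a window is the square root of the summed squared log returns, and submissions are scored with RMSPE. Minimal implementations of both, per the competition's definitions (the price series passed in is a placeholder for the book-derived price used in the data):

```python
import numpy as np

def realized_volatility(prices: np.ndarray) -> float:
    """Square root of the summed squared log returns over the window."""
    log_returns = np.diff(np.log(prices))
    return float(np.sqrt(np.sum(log_returns ** 2)))

def rmspe(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root Mean Square Percentage Error, the competition metric."""
    return float(np.sqrt(np.mean(((y_true - y_pred) / y_true) ** 2)))
```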

Flower Power

Owen Harris, Tim Keegan, Axel Nielsen, Thalia Koutsougeras

This project's ENGP capstone counterpart explores the building integration of a hybrid Concentrated Photovoltaic-Thermal (CPV/T) system for cogeneration of electricity and process heat. The CS side aims to improve control over the equipment and make data collection easier and more user-friendly. The final product is a website that connects to a Raspberry Pi on the roof of Flower Hall, which is in turn connected to the equipment, implementing features such as displaying temperatures and various other data, starting and stopping the tracking system, and taking cell power measurements.

The Madness of March

Jim Haines, Josh McCoy

We explore the use of multiple data sources and machine learning models to predict the outcome of games in the well-known NCAA basketball March Madness tournament for both men and women. We joined a Kaggle competition that provides a baseline dataset of past scores and statistics, as well as an evaluation platform where thousands of groups from around the world submit and score their models against each other. To stand out among so many participants, we explored outside data sources, including high school recruiting data and advanced analytics, in our models. For our main model we use XGBoost, a gradient-boosted decision tree method, trained on both the Kaggle data and our external data. The Kaggle competition is scored with the Brier Score, which, for a set of N predictions, measures the mean squared difference between the predicted and actual outcome of an event. Our submission consists of 1,000 simulated tournaments in which we select a predicted winner in each round. To score models, Kaggle uses the implied probability that a team wins each round based on the submission portfolio, then scores these predictions against the actual outcomes using the Brier Score.
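
The Brier Score described above is a one-line computation; a minimal implementation:

```python
import numpy as np

def brier_score(predicted_probs: np.ndarray, outcomes: np.ndarray) -> float:
    """Mean squared difference between the predicted win probability and
    the actual outcome (1 if the team won, 0 if it lost)."""
    return float(np.mean((predicted_probs - outcomes) ** 2))

# Three games: confident and right, a tossup, confident and wrong.
print(brier_score(np.array([0.9, 0.5, 0.8]), np.array([1, 1, 0])))  # 0.30
```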

College Compass

Devin Gutierrez, Tanner Martz

In an era where technology and education increasingly intersect, College Compass emerges as an innovative solution to the fragmented nature of traditional degree audits and course catalogs. Advised by Dr. Hassan, this web platform aggregates essential information into a singular, intuitive interface for course planning, skill tracking, and career exploration. By curating data from diverse academic resources, College Compass offers a visually engaging and interactive experience, simplifying the academic planning process while empowering students to align their education with career aspirations and skill development goals.

Moiré: Long Range Sensor Network for Reforestation Monitoring

Alex Motyka, Bennett Hermanoff, Charley Waldrop, Luke Farnan, Maddie Wisinski

We present Moiré: a hardware and accompanying software system for monitoring abiotic conditions across large expanses of land. Moiré's custom, solar-powered sensors monitor soil moisture, temperature, humidity, and light level. The sensors form a mesh network over the LoRa long-range communication protocol and send readings throughout the day. Accompanying software monitors and visualizes the data received from the sensor nodes in a custom, full-stack web application. The technology will first be deployed by FCAT, an Ecuadorian NGO focused on biodiversity, to identify the most favorable conditions for reforestation.
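
Bandwidth over LoRa is scarce, so readings travel best as compact binary packets rather than text. A sketch of how one node's reading might be packed and unpacked (the field layout is our illustration, not Moiré's actual wire format):

```python
import struct
import time

# Hypothetical packet layout: node id, unix time, then four readings.
PACKET_FMT = "<BIffff"  # uint8, uint32, 4x float32 = 21 bytes

def pack_reading(node_id, soil_moisture, temperature, humidity, light):
    """Serialize one sensor reading for LoRa transmission."""
    return struct.pack(PACKET_FMT, node_id, int(time.time()),
                       soil_moisture, temperature, humidity, light)

def unpack_reading(packet: bytes) -> dict:
    """Decode a packet back into named fields on the receiving node."""
    node_id, ts, soil, temp, hum, light = struct.unpack(PACKET_FMT, packet)
    return {"node": node_id, "time": ts, "soil_moisture": soil,
            "temperature": temp, "humidity": hum, "light": light}

pkt = pack_reading(3, 0.42, 24.6, 81.0, 512.0)
print(len(pkt), unpack_reading(pkt))
```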

Computational Epitope Prediction

Marco Carbullido, Maddie Bonanno

CD4+ T-cells identify and recognize specific parts of protein antigens called epitopes, which are crucial to the initiation of adaptive immune responses. Advances in deep learning approaches to protein structure prediction have given scientists the ability to predict the structure of nearly all known proteins with Google DeepMind's AlphaFold2. However, experimentally determined protein structures are frequently reported and analyzed alongside thermal factors, known as b-factors, that capture protein flexibility and mobility; AlphaFold2 does not produce these. Mettu et al. have shown that an antigen's structural information (including b-factors) improves epitope prediction through the computation of Antigen Processing Likelihood (APL). We explored computational methods to predict protein b-factors and applied them to AlphaFold2 predictions to compute APL, allowing epitope prediction purely from gene sequences. Our most successful implementation reproduced the LSTM model proposed in 2023 by Pandey et al., trained on publicly available experimental data and applied to the structures predicted by AlphaFold2. Using this method, there was no significant change in the positive predictive value (PPV) of computed APL for epitope prediction when measured across a set of test antigens.
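
A sketch of the kind of sequence-to-b-factor LSTM we reproduced (hidden sizes, encoding, and layer choices here are illustrative; see Pandey et al. 2023 for the actual architecture):

```python
import torch
import torch.nn as nn

class BFactorLSTM(nn.Module):
    """Predict a per-residue b-factor from a one-hot amino acid sequence."""
    def __init__(self, n_amino_acids=20, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_amino_acids, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # one b-factor per residue

    def forward(self, seq_onehot):
        # seq_onehot: (batch, seq_len, 20)
        out, _ = self.lstm(seq_onehot)
        return self.head(out).squeeze(-1)  # (batch, seq_len)

model = BFactorLSTM()
x = torch.zeros(1, 120, 20)  # toy 120-residue protein, one-hot encoded
x[0, torch.arange(120), torch.randint(0, 20, (120,))] = 1.0
print(model(x).shape)  # torch.Size([1, 120])
```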

Predicting Bond Amounts in Orleans Magistrate Court

Will Rodman and Bharat Solanky

Our project explores the potential of machine learning to predict bail amounts within Orleans Parish court databases and to understand the factors influencing these decisions in the context of the criminal legal system's transparency and equity. Working in collaboration with Court Watch NOLA and guided by Dr. Aron Culotta, we utilize over 800,000 court records from the Orleans Parish District Criminal Court, focusing on 9,800 cases from 2020-2022 to build a model capable of estimating initial bond values. Inputs to our model include charge type, magistrate, and defendant demographics, among others, clustered into 40 categories for analysis. Preliminary results, obtained by comparing several regression models, show that Support Vector Regression outperforms the others with a significantly lower mean percentage error, suggesting machine learning's efficacy in estimating bond amounts accurately. Further analysis of the model's features indicated a negative correlation between bond amounts and the defendant's age, answering critical research questions and highlighting areas for future exploration of bias and fairness within bail determinations.
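
The model comparison at the heart of the analysis can be expressed compactly with scikit-learn (the feature files are placeholders, and the percentage-error scorer here stands in for the project's exact error metric; the real pipeline encodes charge type, magistrate, and demographics into the 40 clustered categories described above):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# X: encoded case features; y: initial bond amounts (placeholders).
X = np.load("bond_features.npy")
y = np.load("bond_amounts.npy")

models = {
    "linear": LinearRegression(),
    "svr": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0)),
}
for name, model in models.items():
    # Scores are negated: closer to zero means lower percentage error.
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_mean_absolute_percentage_error")
    print(f"{name}: mean absolute percentage error {-scores.mean():.1%}")
```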