Khoury News
From programming to policy, Khoury undergrads research tough tech questions
Khoury College's newest, youngest researchers are making their voices heard, turning their passions and projects into full-blown investigations of the technology that dominates our world.
Many Khoury undergraduates are doing important, hard-hitting research. But five undergraduates — Arinjay Singh, Elaine Zhu, Luisa Li, Misha Ankudovych, and Zack Eisbach — have displayed an especially high level of ingenuity and dedication to solving some of the biggest challenges in modern technology. All were nominated by their professors or research advisors for the Computing Research Association’s Outstanding Undergraduate Researcher Award, with Singh and Li being awarded honorable mention and Eisbach placing as a finalist. All five graduated from Khoury College this spring.
From proposing AI regulation to evaluating whether delivery apps are profiling users, these students used the resources around them to tackle leading questions in their fields.
Arinjay Singh: Testing the emotional intelligence of AI

Arinjay Singh, a computer science and economics combined major, was writing about AI before ChatGPT became the phenomenon it is now. Singh’s inherent interest in understanding AI has been a driving force of his work at Northeastern, core to several of his research projects and leading him to add a minor in information ethics.
“Both projects are related in that they have to do with the ethics of AI. All of my research is centered in the field of natural language processing, which is a subsection of AI that I’m particularly interested in,” Singh said.
The first project evaluated patterns of data bias in cross-linguistic AI settings, ultimately producing a comprehensive framework for approaching bias assessment and equitable data construction. Examining natural language processing tasks involving spoken English and Arabic, Singh evaluated AI training data to create a framework that could identify biases in the data before a model used it.
“The project wasn’t necessarily creating a new system. It was more so creating a framework to evaluate existing systems,” Singh said. “Going forward, when someone writes a paper, they can at least reflect on how what they’re putting out in the world is going to perpetuate in an AI model. Because if there are biases, oftentimes they’re not enforced by the people who are affected, so they wouldn’t naturally recognize it.”
Singh’s second project, which he’s currently working on, explores interpretable emotional intelligence in large language models. Singh studied how AI models represent human emotions, how the models designate each emotion, and how the models could be improved at these tasks. The idea was inspired by his observations of the way large language models are being used in high-stakes situations despite having little to no understanding of human emotional processes.
“A more recent push for me [is understanding] how ingrained technology has become with all of our lives, and AI being involved in quite sensitive domains,” Singh said. “People use it for therapy or in mental health spaces; it’s just in a very vulnerable place in society. So, we should at least understand how AI understands emotion, how to make it better, and whether it’s beneficial to the people who use it.”
Singh is not sure whether he wants to work in industry or academia after graduation. However, his interest in the ethical implications of widespread GenAI adoption and in discovering more about the technology’s internal processes will continue to fuel his work.
Elaine Zhu: Profiling users through data collection

Elaine Zhu, a cybersecurity and criminal justice combined major, couldn’t help but notice the large digital ecosystems of the people around her, worlds that included the numerous phones, tablets, laptops, and watches they owned and interacted with frequently.
Zhu’s published paper, which she presented at PETS 2025 in Washington, DC, explored how voice assistants like Alexa, Siri, and Google Assistant profile users based on their voice interactions. She discovered the different methods these assistants use — whether that be tracking searches linked to demographics or purchase history — and how they tag users with categories like income, marital status, and employment.
“If I have a human as a personal assistant, I don’t want that person to be tracking what I do and creating a profile on me,” Zhu said. “These kinds of research allow for a greater discourse on the implications of using all of these devices that are connected to the internet.”
This project began with Tina Khezresmaeilzadeh, a PhD student at the University of Southern California, before Zhu brought it to Northeastern and continued to work on it.
Her current research focuses on a “surveillance pricing” project that started about 10 years ago under Northeastern professors Christo Wilson and Alan Mislove, who studied changes in airline pricing based on personal data gathered by the airline. Zhu is taking this idea and applying it to delivery apps like DoorDash and GrubHub, with the goal of measuring whether these apps change their prices based on collected data.
A major source of inspiration for Zhu’s work is the push for greater transparency from companies about how they use customers’ data, including disclosing those practices to the customers themselves.
“These specific data collection uses weren’t disclosed in Google’s terms of service,” Zhu said, citing the example of Google Assistant. “With this research … it’s about informing the public that these things are happening. We want more companies to be more transparent about their practices.”
Luisa Li: Turning an idea into a solution

When Luisa Li, a computer science and mathematics combined major, began working more with scientists, she noticed the laborious methods they were forced to use to interrogate AI models, inspiring her to create more transparent, interpretable models.
“Say you’re talking to ChatGPT via some prompting. There’s not much you can do to understand what is happening under the hood; there are no restrictions on what the model can be after you train it,” Li said. “My main line of work is understanding what sort of rules or restrictions you can place on the model to make it a little bit better. And it turns out, there’s some very nice math that you can do to get these sort of restrictions on models.”
One ongoing line of work is studying dynamics over a surface — for example, what does heat diffusion look like over a bunny-shaped surface? Li studies the behavior of models when simulating all kinds of dynamics, from processes like heat diffusion, which are considered “smooth,” to processes like the separation of oil and water, which are considered “wrinkly.” Li and her co-author worked out math that, in the heat diffusion case, allowed them to more precisely control the model to simulate the same rate of diffusion as the actual physical process.
“Diffusive processes happen at different speeds, and the question that we want people to think about is, ‘To what extent is the model that you’re using accurately reflecting this behavior?’” Li said.
Li’s paper is undergoing peer review, and she hopes that after it is published, she can continue to make AI models more transparent and reliable through PhD work.
Misha Ankudovych: The road toward AI regulation

After researching with more traditional STEM methods — building software and analyzing data — Misha Ankudovych, a data science and economics combined major, chose a different approach for his work on AI regulation. After taking “Regulation in the Digital Platform Economy” with Khoury–School of Law Assistant Professor Elettra Bietti, Ankudovych began to view AI through a new lens: working legislation.
“When it came time to do one culminating piece for that class, I realized a lot of what we talked about, like privacy, applied to AI,” Ankudovych said. “I’m a data science student, so in my other classes, I have built AI; I’ve built a large language model. I thought it would be interesting to engage with this in a very different way.”
This different way manifested as an almost-50-page piece of AI legislation, one which adopted ideas from other works and added Ankudovych’s own touches. He also highlighted the potential downsides and weaknesses of the previously proposed legislation and adapted the work to a United States context, expanding on potential US stakeholders that might oppose his proposed methods.
Ankudovych is optimistic about the future of AI but acknowledges that this positive scenario will occur only if strong research is done on regulation.
“AI is not only rapidly advancing, but it’s gotten to the point where people don’t even know what AI is. What is an AI? In some senses, it has become a scapegoat,” Ankudovych said. “But I think people are rightfully scared. There’s a very fine balance of the idea of regulation and the idea of also wanting to advance.”
Ankudovych’s proposals include assigning clear-cut responsibility between developers and deployers, transparency requirements for AI platforms, and penalties for violations. Most importantly, he advocates for funds dedicated to teaching the general public how AI works instead of simply how to use it.
Post-graduation, Ankudovych is moving to Washington, DC, to work as a forward-deployed software engineer, helping government officials and their partners create AI and data-driven software while keeping in mind his central message on regulating models well.
“I think we want to make sure that as a country, we are enabling and helping technologists make the best product and the best research,” Ankudovych said. “Before we lose the plot on that, it’s important to think about regulation and the ways in which we can think about these things now so we’re not looking back in five years saying, ‘We’re so far gone; we don’t know where to start.’”
Zack Eisbach: The innovation to solve decades-old problems

Zack Eisbach, a mathematics and computer science combined major, never expected that his research into application binary interfaces (ABIs) would lead to an ongoing project with Apple’s Swift team. But that just speaks to the nature of his work.
ABIs are low-level specifications that define how different program components are allowed to interact with each other. They are crucial for linking code written in different languages, a process known as language interoperability. Eisbach’s research, which he began in his second year at Northeastern, aimed to transform the hundreds of pages of prose dictating each ABI into machine-understandable mathematics, eliminating errors and security vulnerabilities when languages are combined.
“A lot of the things that I’ve been focusing on in my research are ways to make language interoperability safer,” Eisbach said. “There are all of these nice programming languages with new type systems that rule out bugs by construction. But in practice, we have to interact with these unsafe languages that have been around since the 70s; we can’t just go and rewrite all those programs. So, some of my research is partially concerned about how we can make unsafe languages safer for the purposes of interoperability.”
Alongside Khoury PhD student Andrew Wagner and Professor Amal Ahmed, Eisbach wrote a paper chronicling their research and findings, using Apple’s Swift programming language as a case study. Although the Swift team had never had research interns before, they worked with Eisbach after he reached out, speaking with him about his work and later offering him an internship to continue developing the idea.
“Once I got to Apple, I worked directly with a lot of the language features,” Eisbach said. “There’s only so much that you can uncover yourself digging through source code and examples; being able to ask the person who implemented the feature how it worked was extremely valuable.”
After graduation, Eisbach will return to Jane Street, his former co-op employer, as a full-time compiler engineer. He credits much of his inspiration and dedication to research to a deep love for all things data, logic, and math.
“That’s what really got me into programming language research; it’s applicable to the real world,” Eisbach said. “It touches on computer science and programs, but there’s also a very rich history of using math to study these languages. And it touches on a lot of topics from logic as well.”