Amazon-Virginia Tech initiative awards two Amazon Fellows, support for four faculty projects

Faculty awards

Additionally, four faculty members received funding through the initiative for their projects. 

Muhammad Ali Gulzar, assistant professor in the Department of Computer Science, received funding for “Foundations on the Code Comprehensibility of Large Language Models.” LLMs have demonstrated strong performance in code generation. With the rise of agentic LLMs, their use is rapidly expanding into post-development tasks requiring a deeper semantic understanding of code that is not strictly rooted in lexical and syntactic code features. While popular LLM benchmarks measure the accuracy of LLMs’ code generation, the extent to which LLMs truly understand code remains largely unevaluated. This project seeks to design a scalable, quantitative, and automated method for assessing how well an LLM understands code and the impact of this understanding on post-development tasks. The goal is to encourage more mindful use in coding tasks and, in the long run, provide an actionable basis for prioritizing training data for LLM fine-tuning.

Ming Jin, assistant professor in the Bradley Department of Electrical and Computer Engineering, received funding for “Enhancing Foundation Model Reasoning through Reinforcement Learning with Novel Reward Design.” Current efforts to enhance foundation model reasoning face limitations like high compute costs; reward hacking and stability issues with learned reward models; difficulty balancing reasoning quality and efficiency; and challenges in multimodal contexts. Improving complex reasoning of foundation models reliably and efficiently is critical for Amazon’s AI ecosystem. Producing both critiques and actionable hints for a richer signal has shown promise for improving optimization efficiency and effectiveness in previous research. This proposal builds on this foundation by designing novel reward signals that guide a model’s reasoning process, transforming it into a more autonomous agent capable of tackling complex, multi-step problems. 

Chang-Tien Lu, professor in the Department of Computer Science and associate director of the Sanghani Center, received funding for “Privacy-Preserving Collaborative Reasoning in Multi-Agent Systems.” Multi-agent systems enhance performance by combining a weaker but locally accessible model with a more powerful yet proprietary black-box remote model. This combination exposes local data to a remote agent, raising concerns about information leakage, especially in sensitive domains like healthcare information, financial records, and e-commerce activities. For virtual assistants like Amazon Alexa and smart home systems, which frequently process sensitive user data, robust local data protection is also crucial for preserving user privacy and trust. The goal of this research is to design a collaborative reasoning mechanism that thoroughly protects sensitive local data before it ever reaches the black-box model for inference.

Tu Vu, assistant professor in the Department of Computer Science, received funding for “Efficient Model Development through Fine-tuning Transfer.” Large Language Models are continually evolving, with newer versions released to improve pretraining quality, architecture, or alignment. Yet each new version of the base model typically demands repeated and computationally expensive alignment procedures. This inefficiency extends to domain- or language-specific models, where fine-tuning must be redone from scratch with every base model upgrade. Transferring fine-tuning updates (i.e., weight differences or “diff vectors”) across model versions offers a compelling alternative: enabling model updates without full retraining. This proposed approach promises to significantly reduce training costs while maintaining competitive performance, making it a viable strategy for sustainable LLM development.
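The core idea of fine-tuning transfer can be illustrated with a minimal sketch: subtract the old base model's weights from the fine-tuned model's weights to get a "diff vector," then add that diff onto the upgraded base model. The function names and toy weight values below are illustrative assumptions, not part of the proposal; real model weights would be tensors rather than plain Python lists.

```python
# Hypothetical sketch of fine-tuning transfer via "diff vectors".
# Weights are modeled as dicts mapping parameter names to lists of floats;
# this assumes the old and new base models share the same architecture.

def diff_vector(finetuned, base):
    """Per-parameter difference between a fine-tuned model and its base."""
    return {name: [f - b for f, b in zip(finetuned[name], base[name])]
            for name in base}

def apply_diff(new_base, diff):
    """Graft the fine-tuning update onto a newer base model version."""
    return {name: [w + d for w, d in zip(new_base[name], diff[name])]
            for name in new_base}

# Toy example: a single "layer" with three weights.
base_v1      = {"layer": [0.10, 0.20, 0.30]}
finetuned_v1 = {"layer": [0.15, 0.18, 0.35]}   # result of expensive fine-tuning
base_v2      = {"layer": [0.12, 0.22, 0.28]}   # upgraded base model

delta = diff_vector(finetuned_v1, base_v1)      # update learned on v1
merged_v2 = apply_diff(base_v2, delta)          # candidate fine-tuned v2, no retraining
```

In practice the merged model would then be evaluated, and possibly lightly tuned, rather than used as-is; the sketch only shows why the transfer avoids repeating the full alignment procedure for each base-model release.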

About the workshop

The invitation-only AI workshop was held in October at Academic Building One in Alexandria and included remarks by Lance Collins, vice president of the greater Washington, D.C., area; Ramakrishnan; and Anand Rathi, center liaison and director, software development, artificial general intelligence, at Amazon. 

“We are pleased to welcome our Amazon collaborators to Virginia Tech’s new Academic Building One in Alexandria for our annual gathering,” Ramakrishnan said. “It is a great opportunity to connect Virginia Tech faculty in the space of AI with Amazon researchers and foster future collaborations.”  

“Our collaboration with Virginia Tech represents a strategic investment in developing the next generation of AI talent and innovation,” said Rathi. “The research emerging from this partnership continues to advance our understanding of responsible and efficient AI systems while preparing students for the complex challenges of tomorrow.”

Additionally, Chalapathi Choppa, senior manager, security engineer, Amazon, discussed Amazon Artificial General Intelligence and the importance of responsible AI, and two Virginia Tech faculty members who have sponsored research projects with Amazon gave lightning talks. They were: 

  • Ruoxi Jia, assistant professor, Bradley Department of Electrical and Computer Engineering, “A Compositional Framework for Proactive AI Safety”
  • Hongliang Xin, assistant professor, Department of Chemical Engineering, “Next-Generation Catalysts for Fischer–Tropsch Synthesis”

Previous events related to the initiative have been held at the Virginia Tech Research Center — Arlington and on the university’s Blacksburg campus.




