About me

Hi, I’m Chimaobi.
I’m a second-year PhD student in Computer Science and Engineering at the University of Michigan, co-advised by Professors Joyce Chai and Rada Mihalcea. Before Michigan, I earned my Bachelor’s degree in Electrical and Electronics Engineering from FUTO, Nigeria, where I specialized in Electronics Engineering (ECE) and graduated as the best student in my class.

My research interests lie in Natural Language Processing, with a focus on alignment, personalization, LLM safety, robustness, and reasoning.

Research Theme

I seek to make AI models more useful for YOU and all users. As such, I am mostly interested in AI Alignment: per-user alignment (personalization), alignment to user groups (e.g., cultures), and general alignment to core human values, a.k.a. the 3Hs (Helpfulness, Harmlessness, and Honesty). My research is twofold:

Life-long personalization: As users continuously interact with LLMs through platforms such as ChatGPT, Gemini, and others, these interactions naturally create a rich and evolving context of user information. This information can be explicit (e.g., stated preferences), implicit, or latent. My research aims to develop efficient methods that steer model behavior appropriately toward user needs, expectations, and goals by leveraging this vast information.

A key challenge, however, is that user goals are dynamic. Therefore, a central focus of my work is on seeking better grounding approaches that adapt models to evolving user goals, ensuring reliable personalization over the long term.

Robust Personalization: In optimizing for personalization, how do we ensure robustness, i.e., that aligning to user preferences neither degrades model factuality nor raises safety concerns? I formalize the notion of robustness in the context of personalization and highlight critical issues with current evaluation approaches that focus solely on alignment [1]. By rethinking how we evaluate and design personalized AI systems, I seek to build methods that preserve truthfulness and prevent harmful failure modes, while still adapting meaningfully to diverse user goals.

TL;DR: I study robust life-long personalization of AI agents: seeking ways to better adapt LLMs to user features (implicit, explicit, latent) in a dynamic fashion without compromising safety and factuality.

Recent News

  • Aug 2025: My first paper, Benchmarking and Improving LLM Robustness for Personalized Generation, was accepted to EMNLP 2025. See you in Suzhou :yum:
  • Aug 2024: Joined the SLED and LIT labs @UofM CSE as a PhD student. #GoBlue :sunglasses: