Amrit Adhikari
AI engineer and backend developer building production-ready NLP and LLM systems. I work at the intersection of machine learning, scalable backend architecture, and responsible AI, with an emphasis on deploying reliable systems in real-world environments. My work spans LLM-powered applications, retrieval-augmented generation (RAG) pipelines, and cloud-native services, with attention to performance, monitoring, and safety.
End-to-end LLM solutions — chat assistants, document Q&A, and decision-support tools. Prompt engineering, structured outputs, tool calling, and system-level reliability.
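A minimal sketch of the structured-output pattern mentioned above: validate the model's JSON against a required schema and retry on failure. All names here are hypothetical, and `call_llm` is a stub standing in for a real LLM API call.

```python
import json

# Hypothetical schema: field name -> required Python type.
REQUIRED_FIELDS = {"answer": str, "confidence": float}

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call."""
    return '{"answer": "Paris", "confidence": 0.92}'

def parse_structured(raw: str):
    """Return the parsed object if it matches the schema, else None."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return None
    for field, ftype in REQUIRED_FIELDS.items():
        if not isinstance(obj.get(field), ftype):
            return None
    return obj

def ask(prompt: str, retries: int = 2) -> dict:
    """Query the model, retrying while the output fails validation."""
    for _ in range(retries + 1):
        result = parse_structured(call_llm(prompt))
        if result is not None:
            return result
    raise ValueError("model never produced valid structured output")
```

In production the schema check would typically be a full JSON Schema or typed-model validation, but the validate-then-retry loop is the reliability mechanism this illustrates.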
RAG systems that retrieve accurately and generate grounded, low-hallucination answers. Chunking strategies, embedding pipelines, vector search, evaluation, and monitoring.
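The chunk-embed-retrieve loop above can be sketched in a few lines. This is a toy illustration: word-overlap scoring stands in for embedding cosine similarity, and the chunking is a simple overlapping word window.

```python
def chunk(text: str, size: int = 40, overlap: int = 10) -> list:
    """Split text into overlapping word windows (one common chunking strategy)."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def score(query: str, passage: str) -> float:
    """Jaccard word overlap; a real pipeline would use embedding similarity."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / len(q | p) if q | p else 0.0

def retrieve(query: str, chunks: list, k: int = 2) -> list:
    """Return the top-k chunks used to ground the generator's answer."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]
```

Grounding the generation step on only the retrieved chunks (and evaluating retrieval quality separately from answer quality) is what keeps hallucination low.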
Microservices, secure APIs, distributed locking, caching, containerization, and cloud deployment. Making AI platforms scalable, secure, and cost-efficient.
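As one concrete piece of the backend work, here is the token-based distributed-lock pattern (set-if-absent to acquire, compare-token-then-delete to release). `FakeStore` is an in-memory stand-in for a store like Redis, used only to keep the sketch self-contained.

```python
import time
import uuid

class FakeStore:
    """In-memory stand-in for a shared store; only what the lock needs."""
    def __init__(self):
        self.data = {}

    def set_nx(self, key, value):
        """Atomic set-if-absent, analogous to Redis SET ... NX."""
        if key in self.data:
            return False
        self.data[key] = value
        return True

    def get(self, key):
        return self.data.get(key)

    def delete(self, key):
        self.data.pop(key, None)

def acquire(store, name, timeout=1.0):
    """Try to take the lock; the random token proves ownership on release."""
    token = str(uuid.uuid4())
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if store.set_nx(name, token):
            return token
        time.sleep(0.01)
    return None

def release(store, name, token):
    """Release only if we still hold the lock (compare token before delete)."""
    if store.get(name) == token:
        store.delete(name)
        return True
    return False
```

The token check on release prevents a client whose lock expired from deleting a lock now held by someone else; a real deployment would also set a key TTL so a crashed holder cannot block forever.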
Bias checks, prompt regression testing, monitoring for failure modes, and human-in-the-loop workflows. Ensuring AI features are reliable enough for high-impact use.
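Prompt regression testing, mentioned above, amounts to pinning prompts to assertions about their answers and re-running the suite whenever a prompt or model changes. A minimal sketch, with a stubbed model in place of a real deployment and hypothetical case names:

```python
import re

def stub_model(prompt: str) -> str:
    """Stand-in for a real model; a real harness would call the deployed LLM."""
    if "refund" in prompt.lower():
        return "You can request a refund within 30 days of purchase."
    return "I'm not sure about that."

# Each case pins a prompt to a pattern the answer must (or must not) match.
REGRESSION_CASES = [
    {"prompt": "What is your refund policy?", "must_match": r"30 days"},
    {"prompt": "What is your refund policy?", "must_not_match": r"(?i)no refunds"},
]

def run_regressions(model, cases):
    """Return the failing cases; an empty list means the suite passes."""
    failures = []
    for case in cases:
        answer = model(case["prompt"])
        if "must_match" in case and not re.search(case["must_match"], answer):
            failures.append(case)
        if "must_not_match" in case and re.search(case["must_not_match"], answer):
            failures.append(case)
    return failures
```

Failing cases can gate deployment the same way failing unit tests do, which is what makes prompt changes safe to ship.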
- Directed Study (Fall 2025): Multi-Modal Hail Prediction · Dr. Ting Xiao
- Summer Research Fellow (2025): Cross-Lingual Prompting Bias in Large Language Models for Human Rights Classification · Dr. Poli Nemkova & Dr. Mark V. Albert
- Software Engineering (Fall 2024): Presented on deploying microservices with Docker and Kubernetes on AWS · Dr. Mohsen Amini