Hi there! I am Nhan Tran (sounds like “Nyun”).
I'm also pursuing a minor in Media Studies (focusing on Cinematography and Visual Storytelling) in Cornell's Department of Performing and Media Arts.
Before graduate school, I spent two wonderful years in industry working on robotics perception and human-robot interaction at Robust.AI (check out our robot here). Prior to that, I interned, learned, and collaborated with the amazing teams at Robust.AI, Facebook, Google Nest, and NASA/Caltech Jet Propulsion Laboratory.
My current research interests include extended reality (XR), an umbrella term covering AR, VR, and MR.
CV/Resume GitHub Google Scholar LinkedIn Twitter/X YouTube
nhan at cs dot cornell dot edu
Nhan Tran, Trevor Grant, Thao Phung, Leanne Hirshfield, Christopher Wickens, Tom Williams
HCI International Conference on Virtual, Augmented, and Mixed Reality (HCII 2023)
We explored whether the success of mixed reality deictic gestures for human-robot communication depends on a user's cognitive load, through an experiment grounded in theories of cognitive resources. We found that these gestures provide benefits regardless of cognitive load, but only when paired with complex language. Our results suggest that designers can pair rich referring expressions with these gestures without overloading users.
What's The Point? Tradeoffs Between Effectiveness and Social Perception When Using Mixed Reality to Enhance Gesturally Limited Robots
Jared Hamilton, Thao Phung, Nhan Tran, Tom Williams
ACM/IEEE International Conference on Human-Robot Interaction (HRI 2021)
We present the first experiment analyzing the effectiveness of robot-generated mixed reality gestures using real robotic and mixed reality hardware. Our findings demonstrate that these gestures increase user effectiveness by decreasing response time during visual search tasks, and show that robots can safely pair longer, more natural referring expressions with mixed reality gestures without cognitively overloading their interlocutors.
HRI Pioneers Workshop at the International Conference on Human-Robot Interaction (HRI 2020)
Tom Williams, Matthew Bussing, Sebastian Cabrol, Elizabeth Boyle, Nhan Tran
ACM/IEEE International Conference on Human-Robot Interaction (HRI 2019)
We investigate human perception of videos simulating the display of allocentric gestures, in which robots circle their targets in users' fields of view. Our results suggest that this is an effective communication strategy, both in terms of objective accuracy and subjective perception, especially when paired with complex natural language references.
Tom Williams, Nhan Tran, Josh Rands, Neil T Dantam
HCI International Conference on Virtual, Augmented, and Mixed Reality (HCII 2018)
Humans use deictic gestures, such as pointing, to help their interlocutors identify targets of interest. Research shows that similar gestures by robots enable effective human-robot interaction. We present a conceptual framework for mixed reality deictic gestures and summarize our work using these techniques to advance the state of the art in robot-generated deixis.
Films & Videos
I have created some videos over the years as a personal hobby and creative outlet. Now that I am pursuing a Ph.D. in Computer Science with a minor in Media Studies (focusing on Cinematography), I am looking forward to having more opportunities to tell visual stories during breaks from research and teaching responsibilities.
Stay tuned for more or subscribe to my YouTube channel!
I led this project with a team of undergraduates to transform a medical crash cart used in hospitals into a smart robotic system, as part of the Mobile Human-Robot Interaction class taught by Prof. Wendy Ju at Cornell Tech. The base is built on a modified hoverboard. On the perception side, we use a RealSense depth sensor to prototype a "follow me" interaction, in which the robot carries medical supplies and follows a designated user (a minimal sketch of the idea follows below).
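Here is a minimal Python sketch of the "follow me" loop, assuming pyrealsense2 for depth sensing; the fixed center-pixel sampling and the drive_base interface are hypothetical placeholders, not the project's actual code.

```python
# A minimal sketch of the "follow me" loop, assuming pyrealsense2;
# center-pixel sampling and drive_base are hypothetical placeholders.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

TARGET_DISTANCE_M = 1.0  # stay about one meter behind the user

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        # Sample depth at the image center as a stand-in for a tracked user;
        # the actual prototype tracks the person rather than a fixed pixel.
        distance_m = depth.get_distance(320, 240)
        error = distance_m - TARGET_DISTANCE_M
        # drive_base is hypothetical: move forward/backward to close the gap.
        # drive_base.set_velocity(linear=0.5 * error, angular=0.0)
finally:
    pipeline.stop()
```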
Wall-Z 1.0
My friend Ryan and I built the Wall-Z robot, inspired by Disney's WALL-E. It runs ASL recognition on-device on an NVIDIA Jetson, streams its surroundings to a VR headset for remote environment visualization, and synchronizes its head movement with the headset's orientation (sketched below).
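A hedged sketch of the head-sync idea: map the headset's yaw and pitch to the robot's neck servos. The get_headset_orientation() stub and the servo calls are hypothetical, standing in for whatever VR SDK and motor driver the robot actually uses.

```python
import time

def get_headset_orientation():
    """Hypothetical stub: returns (yaw, pitch) in degrees from the VR SDK."""
    return 0.0, 0.0

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

while True:
    yaw_deg, pitch_deg = get_headset_orientation()
    # Assume hobby servos with a 0-180 degree range, centered at 90.
    pan = clamp(90 + yaw_deg, 0, 180)
    tilt = clamp(90 + pitch_deg, 30, 150)  # narrower range protects the neck
    # pan_servo.write(pan); tilt_servo.write(tilt)  # hypothetical servo calls
    time.sleep(0.02)  # ~50 Hz update keeps the mirrored motion smooth
```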
Mixed-Reality Assistant for Medication Navigation and Tracking (Code)
I built an embodied mixed reality assistant on the Microsoft HoloLens 1. Through virtual interfaces, users can anchor the locations of their pill bottles; the assistant saves these anchors in a spatial map and, on request, projects an overlay of the shortest path from the user's current position to a saved anchor point (a sketch of the path-finding step follows below).
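A minimal sketch of the path-finding step, assuming the saved anchors form a graph with walkable, distance-weighted edges; the graph contents and names here are illustrative, not taken from the app.

```python
# Dijkstra over an adjacency dict: {node: [(neighbor, cost), ...]}.
import heapq

def shortest_path(graph, start, goal):
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Toy spatial map: the user's position plus two saved pill-bottle anchors.
anchor_graph = {
    "user": [("hallway", 2.0)],
    "hallway": [("kitchen_shelf", 3.5), ("bathroom_cabinet", 4.0)],
    "kitchen_shelf": [],
    "bathroom_cabinet": [],
}
print(shortest_path(anchor_graph, "user", "kitchen_shelf"))
# -> (5.5, ['user', 'hallway', 'kitchen_shelf'])
```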
3D-printed Mars Rover (Video)
Team project with the Mines Robotics Club. We built a tiny Mars rover to compete in the Colorado Space Grant Robotics Challenge. The rover used several proximity sensors to avoid obstacles and drive toward a beacon, and was built to withstand the Mars-like terrain of Great Sand Dunes National Park.
Blasterbotica: The Mining Bot at the NASA Robotic Mining Competition (Video)
Built with the 2016 Colorado School of Mines' Blasterbotica senior design team to compete in the NASA Robotic Mining Competition, this robot could traverse the arena, avoid obstacles, excavate regolith, and dump the collected regolith into the final collection bin. I was a member of the perception team, learning to use ROS and OpenCV to detect obstacles and the collection bin (a small OpenCV sketch follows below).
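A hedged sketch of bin detection via color thresholding in OpenCV; the HSV range and camera index are illustrative guesses, not the team's actual values.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # onboard camera (index is an assumption)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Threshold for a distinctly colored collection bin (illustrative range).
    mask = cv2.inRange(hsv, np.array([100, 120, 70]), np.array([130, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Take the largest blob as the best guess for the bin and box it.
        biggest = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(biggest)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("bin detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```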
Biped Robot v1.5 - A DIY Humanoid Walking Robot (Video)
My friend Arthur and I built this biped robot over a weekend, inspired by the debut of Boston Dynamics' Atlas robot. It was designed to imitate human walking, detect obstacles, and be operated using hand gestures. Through DIY, we learned that bipedal locomotion is hard!
Hailfire, a hand-gesture-controlled robot
Carpal tunnel arm coach
A robotic hand that coaches users through exercises to help prevent carpal tunnel syndrome.
Sir Mixer: An emotionally aware bartender robot (Video)
My roommate Patrick and I built an IoT drink mixer that interprets the facial expressions of human users, infers their emotions, and then mixes drinks accordingly (a hedged sketch of the mapping follows below).
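A hedged sketch of the emotion-to-drink step, assuming some facial expression classifier supplies an emotion label; detect_emotion() is a hypothetical stand-in, and the recipes are illustrative, not the robot's actual menu.

```python
# Illustrative mapping from inferred emotion to a drink choice.
EMOTION_TO_DRINK = {
    "happy": "fruit punch",
    "sad": "hot chocolate",
    "angry": "chamomile iced tea",
    "neutral": "sparkling water",
}

def detect_emotion(frame):
    """Hypothetical stub for the face/emotion model running on the device."""
    return "happy"

def mix_for(frame):
    emotion = detect_emotion(frame)
    drink = EMOTION_TO_DRINK.get(emotion, "sparkling water")
    # dispenser.pour(drink)  # hypothetical pump/dispenser call
    return drink

print(mix_for(frame=None))  # -> "fruit punch"
```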