About Me

I am a Research Scientist at Meta Reality Labs. My work focuses on developing advanced on-device solutions to enhance the perception stack for Meta’s MR/VR product lines. Additionally, I conduct fundamental research on 3D generative AI.

Prior to joining Meta Reality Labs, I was a technical lead and senior machine learning/computer vision engineer in the Video Engineering Group at Apple Inc. During my tenure at Apple, I led algorithm development and shipped multiple groundbreaking products, including Room Tracking on VisionPro, RoomPlan Enhancement, and RoomPlan. I also collaborated with Apple AIML on 3D scene style generation, where we pioneered RoomDreamer, the first work to enable text-driven 3D indoor scene synthesis with coherent geometry and texture.

I received my Ph.D. and M.S. degrees from the University of Maryland, College Park, where I was advised by Prof. Rama Chellappa. I completed my B.S. in Electrical Engineering and Information Science at the University of Science and Technology of China. Earlier, I held internships at Snap Research and the Palo Alto Research Center.

Product Highlights

  • Jun, 2024. Room Tracking on VisionPro is unveiled at Apple WWDC 2024. This technology identifies room boundaries, supports precisely aligned geometries, and recognizes transitions between rooms.
  • Jun, 2023. RoomPlan Enhancement is introduced at Apple WWDC 2023. It adds numerous powerful features to RoomPlan, including multi-room scanning, multi-room layout, object attributes, polygon walls, improved furniture representation, room-type identification, and floor-shape recognition.
  • Jun, 2022. RoomPlan is first released at Apple WWDC 2022. Combining the power of Apple LiDAR, state-of-the-art 3D machine learning, and an intuitive scanning UI, RoomPlan empowers developers to create innovative solutions in interior design, architecture, real estate, and e-commerce.

Research Highlights