Discover researchers with complementary expertise, aligned interests, and collaboration potential based on publication patterns. A sketch of how such matching might work appears after the profiles below.

Dr. Sarah Liu
Assistant Professor
Stanford University
Dr. Liu's work on model quantization complements your focus on efficient architectures. Her recent paper on mixed-precision training could extend your transformer optimization research.

Prof. Kim
Associate Professor
MIT CSAIL
Prof. Kim's interpretability research could help explain efficiency gains in your architectures. His probing methodology is widely cited and could validate your model improvements.

Dr. Petrova
Research Scientist
Google DeepMind
Dr. Petrova leads DeepMind's efficient inference team. Her speculative decoding work aligns with your architectural efficiency goals and could lead to industry collaboration.

Dr. Chen
Postdoctoral Researcher
UC Berkeley
Dr. Chen's linear attention variants achieve similar efficiency goals through different methods. A joint paper comparing approaches could be highly impactful.
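
The section does not describe the matching mechanics, so the following is a minimal sketch of publication-based matching, assuming each researcher is represented by precomputed embeddings of their publication abstracts (from any off-the-shelf sentence-embedding model). The functions profile_vector and rank_matches, and all data in the toy example, are hypothetical illustrations, not the tool's actual API.

```python
# Hypothetical sketch: rank candidate collaborators by similarity of
# publication-embedding profiles. Embeddings are assumed precomputed.
import numpy as np


def profile_vector(pub_embeddings: np.ndarray) -> np.ndarray:
    """Collapse a researcher's publication embeddings (n_pubs x dim)
    into one L2-normalized profile vector."""
    mean = pub_embeddings.mean(axis=0)
    return mean / np.linalg.norm(mean)


def rank_matches(user_pubs: np.ndarray,
                 candidates: dict[str, np.ndarray],
                 top_k: int = 3) -> list[tuple[str, float]]:
    """Score each candidate by cosine similarity between normalized
    profile vectors and return the top_k (name, score) pairs."""
    user = profile_vector(user_pubs)
    scores = {name: float(user @ profile_vector(pubs))
              for name, pubs in candidates.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]


# Toy usage with random 8-dimensional "embeddings".
rng = np.random.default_rng(0)
you = rng.normal(size=(5, 8))          # your 5 publications
pool = {"Dr. Sarah Liu": rng.normal(size=(4, 8)),
        "Prof. Kim": rng.normal(size=(6, 8)),
        "Dr. Petrova": rng.normal(size=(3, 8))}
for name, score in rank_matches(you, pool):
    print(f"{name}: {score:.2f}")
```

Note that plain cosine similarity only captures topical alignment; surfacing genuinely complementary expertise, as the feature promises, would need a richer score, for example one that rewards overlap in problems while allowing divergence in methods.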