Find Your Next Collaborator

Discover researchers with complementary expertise, aligned interests, and collaboration potential based on publication patterns.
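The page does not say how the match percentages below are computed, only that they derive from publication patterns. As a rough illustration of what such a score could look like, the sketch below blends research-area overlap with similarity of publication-keyword profiles. Everything here is an assumption for illustration — the weights, the field choices, and the Jaccard/cosine combination are not the tool's actual algorithm.

```python
from collections import Counter
import math

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of research-area tags, in [0, 1]."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def cosine(p: Counter, q: Counter) -> float:
    """Cosine similarity between two keyword-frequency profiles, in [0, 1]."""
    dot = sum(p[k] * q[k] for k in p.keys() & q.keys())
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

def match_score(my_areas, their_areas, my_profile, their_profile,
                w_areas=0.4, w_profile=0.6):
    """Hypothetical match score: a weighted blend of research-area overlap
    and publication-keyword similarity, reported as a whole percentage."""
    s = (w_areas * jaccard(set(my_areas), set(their_areas))
         + w_profile * cosine(my_profile, their_profile))
    return round(100 * s)

# Illustrative inputs only; the tags and keyword counts are made up.
score = match_score(
    ["Transformers", "Efficient ML"],
    ["Transformers", "Quantization", "Efficient ML"],
    Counter({"attention": 12, "pruning": 5}),
    Counter({"attention": 9, "quantization": 7}),
)
print(f"{score}% Match")
```

A real system would likely also weigh co-authorship distance, venue overlap, and citation graphs, but none of that is specified on this page.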

92% Match
Dr. Sarah Liu
Assistant Professor, Stanford University
127 publications · 8,432 citations · h-index 34
Research areas: Transformers, Efficient ML, Quantization, Model Compression

Why This Match

Dr. Liu's work on model quantization complements your focus on efficient architectures. Her recent paper on mixed-precision training could extend your transformer optimization research.

87% Match
Prof. Michael Kim
Associate Professor, MIT CSAIL
89 publications · 12,891 citations · h-index 41
Research areas: NLP, Interpretability, Probing, Language Models

Why This Match

Prof. Kim's interpretability research could help explain efficiency gains in your architectures. His probing methodology is widely cited and could validate your model improvements.

84% Match
Dr. Elena Petrova
Research Scientist, Google DeepMind
45 publications · 5,672 citations · h-index 28
Research areas: Efficient Inference, Speculative Decoding, Transformers, LLM Systems

Why This Match

Dr. Petrova leads DeepMind's efficient inference team. Her speculative decoding work aligns with your architectural efficiency goals and could lead to industry collaboration.

81% Match
Dr. James Chen
Postdoctoral Researcher, UC Berkeley
23 publications · 1,234 citations · h-index 14
Research areas: Attention Mechanisms, Linear Attention, NLP, Long Context

Why This Match

Dr. Chen's linear attention variants pursue the same efficiency goals through different methods. A joint paper comparing the two approaches could be highly impactful.
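
A note on the h-index figures shown on each card: a researcher's h-index is the largest h such that at least h of their papers each have h or more citations, so it summarizes output and impact in a single number. A minimal sketch of the computation (the citation counts below are made up):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers each have >= h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Made-up example: the 3 most-cited papers each have >= 3 citations,
# but only 3 papers clear the 4-citation bar needed for h = 4.
print(h_index([10, 8, 5, 3, 2]))  # -> 3
```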