Barry (Xuanyi) Dong
Augment; ex-Google DeepMind
Address: 395 Page Mill Rd, Palo Alto, CA
Email: xuanyi.dxy [at] gmail [dot] com

About Me ([GitHub] [Google Scholar] [Full Publications])

I am building useful AI tools for developers at Augment as a founding member. I was a research scientist at Google DeepMind, where I developed a variety of techniques used in Bard (Gemini), Ads, and Cloud. Before that, I spent over five years doing research and engineering at Google, Facebook, Amazon, Microsoft, UTS, etc. In general, my research interests include large generative models and their applications to building a better world.

I received a Ph.D. degree from the School of Computer Science, University of Technology Sydney (UTS) and a B.E. degree from Beihang University.

Selected Industrial Experience

  • Jul 2023 - Present, Founding Member, Augment
  • Feb 2022 - Jul 2023, Research Scientist, Google DeepMind
  • Jan 2016 - Feb 2022, Over five years of research scientist experience at Amazon Web Services (AWS) AI, Google, Meta, Microsoft, Alibaba, etc.

Selected Publications

DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining
Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, Adams Wei Yu
in NeurIPS 2023
[arXiv] [twitter] [code]
An efficient mixture-tuning algorithm for LLM pretraining data domains.

Symbolic Discovery of Optimization Algorithms
Xiangning Chen, Chen Liang, Da Huang, Esteban Real, Kaiyuan Wang, Yao Liu, Hieu Pham, Xuanyi Dong, Thang Luong, Cho-Jui Hsieh, Yifeng Lu, Quoc V. Le
in NeurIPS 2023
[arXiv] [code]
SoTA performance on vision, LLM, and vision-language benchmarks with the automatically discovered Lion optimizer.

AutoHAS: Efficient Hyperparameter and Architecture Search
Xuanyi Dong, Mingxing Tan, Adams Wei Yu, Daiyi Peng, Bogdan Gabrys, Quoc V. Le
in NAS@ICLR, 2021
[arXiv] [Slides] [BibTex]
A weight-sharing-based hyperparameter and architecture search approach that improves MobileNet/ResNet/EfficientNet/BERT.

NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size
Xuanyi Dong, Lu Liu, Katarzyna Musial, Bogdan Gabrys
in IEEE TPAMI, 2021
[arXiv] [IEEE] [API] [Package] [Project] [BibTex] [ICLR]
An algorithm-agnostic NAS benchmark covering 15,625 candidate neural cells for architecture topology and 32,768 for architecture size across three datasets, with 13 NAS baselines provided in a single codebase.

Selected Invited Talks

  • May 2021: Extend the Search from Arch to HP, Hardware, System, Keynote Speaker at NAS@ICLR 2021
  • Apr 2021: Extend the Search from Architecture to HP, Hardware, System, 4Paradigm, MSRA
  • Sep 2020: Efficient Differentiable Automated Deep Learning, UT Austin
  • Summer 2020: Towards Efficient and Reproducible NAS, Baidu USA, 1st VALSE Student Seminar
  • Summer 2019: Efficient NAS and Its Applications to Computer Vision, YITUTech, Baidu, SCUT, SUST

Awards and Honors

Academic Services

Executive Area Chair at Vision And Learning SEminar (VALSE) [SIX]

Senior Area Chair at AutoML Conference 2022 - 2024

Organizer
ICCV 2021 Workshop -- Neural Architectures: Past, Present and Future

Journal Reviewer
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
International Journal of Computer Vision (IJCV)
IEEE Transactions on Image Processing (TIP)
IEEE Transactions on Circuits and Systems for Video Technology (TCSVT)
Pattern Recognition (PR) [Outstanding Reviewer]

Conference Reviewer or Senior Program Committee
Neural Information Processing Systems (NeurIPS) 2019 - 2022
International Conference on Machine Learning (ICML) 2021 - 2022
International Conference on Learning Representations (ICLR) 2021 - 2022
Computer Vision and Pattern Recognition (CVPR) 2019 - 2022
European Conference on Computer Vision (ECCV) 2020
International Conference on Computer Vision (ICCV) 2019, 2021
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) 2021
AAAI Conference on Artificial Intelligence (AAAI) 2020 - 2022
International Joint Conference on Artificial Intelligence (IJCAI) 2020 - 2022