NATS-Bench: Benchmarking NAS algorithms for Architecture Topology and Size

Xuanyi Dong, Lu Liu, Katarzyna Musial, Bogdan Gabrys

Short Introduction

Neural architecture search (NAS) has attracted a lot of attention and has been shown to bring tangible benefits in a large number of applications in the past few years. Network topology and network size are regarded as two of the most important aspects of deep learning model performance, and the community has produced many search algorithms for both aspects of neural architectures. However, the performance gains reported for these algorithms are achieved under different search spaces and training setups. This makes the overall performance of the algorithms largely incomparable and the contribution of each sub-module of a search method unclear. In this paper, we propose NATS-Bench, a unified benchmark on searching for both topology and size, for (almost) any up-to-date NAS algorithm. NATS-Bench includes the search space of 15,625 neural cell candidates for architecture topology and 32,768 for architecture size on three datasets. We analyse the validity of our benchmark in terms of various criteria and performance comparison of all candidates in the search space. We also show the versatility of NATS-Bench by benchmarking 13 recent state-of-the-art NAS algorithms on it. All logs and diagnostic information, trained using the same setup for each candidate, are provided. This enables a much larger community of researchers to focus on developing better NAS algorithms in a more comparable and computationally affordable environment.

Comparison With Other Benchmarks

| Benchmark | #Unique DNNs | #Datasets | Diagnostic Information | Search Space | Supported NAS Algorithms |
|---|---|---|---|---|---|
| NAS-Bench-101 | 423K | 1 | ✗ | topology | all multi-trial methods |
| S_t in NATS-Bench | 6.5K | 3 | fine-grained information | topology | all multi-trial methods and most one-shot methods |
| S_s in NATS-Bench | 32.8K | 3 | fine-grained information | size | all multi-trial methods and most one-shot methods |

In our manuscript, the architecture size refers to the number of channels in each layer.
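As a sanity check on the search-space sizes quoted above, both spaces can be enumerated directly: the topology space picks one of 5 candidate operations for each of the 6 edges in a cell, and the size space picks one of 8 candidate channel counts for each of 5 layers. The sketch below illustrates the counting; the specific operation names and channel values listed are assumptions for illustration, not queried from the benchmark itself.

```python
from itertools import product

# Topology search space: 5 candidate operations on each of the
# 6 edges of a cell -> 5 ** 6 candidate topologies.
operations = ['none', 'skip_connect', 'nor_conv_1x1', 'nor_conv_3x3', 'avg_pool_3x3']
num_edges = 6
topology_candidates = len(operations) ** num_edges

# Size search space: each of the 5 main layers picks its channel
# count from 8 options -> 8 ** 5 candidate sizes.
channel_choices = [8, 16, 24, 32, 40, 48, 56, 64]
num_layers = 5
size_candidates = len(channel_choices) ** num_layers

print(topology_candidates)  # 15625, matching the topology space size above
print(size_candidates)      # 32768, matching the size space size above

# One concrete size candidate (the first in lexicographic order):
first_size_candidate = next(product(channel_choices, repeat=num_layers))
print(first_size_candidate)  # (8, 8, 8, 8, 8)
```

The candidate counts 15,625 and 32,768 agree with the numbers stated in the introduction.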

The Architecture Space

Figure 1. Middle: the macro skeleton of each architecture candidate. Top: the size search space, where each candidate architecture has a different configuration of channel sizes. Bottom: the topology search space, where each candidate architecture has a different cell topology.

Codes

All code for reproducing the benchmarking experiments and for querying the benchmark information can be found at https://github.com/D-X-Y/AutoDL-Projects. Here are some examples of how to use NATS-Bench:
from nats_bench import create

# Create the API for size search space
api = create(None, 'sss', fast_mode=True, verbose=True)

# Create the API for topology search space
api = create(None, 'tss', fast_mode=True, verbose=True)

# Query the loss / accuracy / time for 1234-th candidate architecture on CIFAR-10
# info is a dict, where you can easily figure out the meaning by key
info = api.get_more_info(1234, 'cifar10')

# Query the flops, params, latency. info is a dict.
info = api.get_cost_info(12, 'cifar10')

# Simulate the training of the 1224-th candidate:
validation_accuracy, latency, time_cost, current_total_time_cost = api.simulate_train_eval(1224, dataset='cifar10', hp='12')

# Clear the parameters of the 12-th candidate.
api.clear_params(12)

# Reload all information of the 12-th candidate.
api.reload(index=12)

# Create an instance of the 12-th candidate for CIFAR-10.
from models import get_cell_based_tiny_net
config = api.get_net_config(12, 'cifar10')
network = get_cell_based_tiny_net(config)

# Load the pre-trained weights: params is a dict, where the key is the seed and value is the weights.
params = api.get_net_param(12, 'cifar10', None)
network.load_state_dict(next(iter(params.values())))
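A typical use of `simulate_train_eval` is to run a NAS algorithm under a simulated wall-clock budget. The sketch below shows a time-budgeted random search loop; since it must be self-contained, a hypothetical `StubAPI` stands in for the real API object (the real `api.simulate_train_eval` returns the same 4-tuple shown above), and the accuracies and costs it returns are fake. Only the loop structure reflects how NATS-Bench is intended to be used.

```python
import random

class StubAPI:
    """Hypothetical stand-in for the NATS-Bench API object. It returns a
    random fake validation accuracy and a fixed simulated training cost
    per query, mimicking the 4-tuple of api.simulate_train_eval."""
    def __init__(self, num_candidates, seed=0):
        self.num_candidates = num_candidates
        self.rng = random.Random(seed)
        self.total_time = 0.0

    def simulate_train_eval(self, index, dataset='cifar10', hp='12'):
        accuracy = self.rng.uniform(50.0, 95.0)   # fake validation accuracy
        latency, time_cost = 0.01, 100.0          # fake per-query costs
        self.total_time += time_cost              # accumulated simulated time
        return accuracy, latency, time_cost, self.total_time

def random_search(api, time_budget):
    """Sample random candidates until the simulated time budget is spent,
    keeping the best validation accuracy seen so far."""
    best_index, best_accuracy, used = None, -1.0, 0.0
    while used < time_budget:
        index = random.randrange(api.num_candidates)
        accuracy, _, _, used = api.simulate_train_eval(index, dataset='cifar10', hp='12')
        if accuracy > best_accuracy:
            best_index, best_accuracy = index, accuracy
    return best_index, best_accuracy

api = StubAPI(num_candidates=32768)  # size of the size search space
best_index, best_accuracy = random_search(api, time_budget=1000.0)
print(best_index, round(best_accuracy, 2))
```

With the real API, the accumulated `current_total_time_cost` lets different NAS algorithms be compared fairly under the same simulated compute budget without retraining any architecture.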


Citation


@article{dong2020nats,
  title={NATS-Bench: Benchmarking NAS algorithms for Architecture Topology and Size},
  author={Dong, Xuanyi and Liu, Lu and Musial, Katarzyna and Gabrys, Bogdan},
  journal={arXiv preprint arXiv:2009.00437},
  year={2020}
}