Maintained by Difan Deng and Marius Lindauer.
The following list considers papers related to neural architecture search. It is by no means complete. If a paper is missing from the list, please let us know.
Please note that although NAS methods steadily improve, the quality of empirical evaluations in this field is still lagging behind that of other areas in machine learning, AI, and optimization. We would therefore like to share some best practices for empirical evaluations of NAS methods, which we believe will facilitate sustained and measurable progress in the field. If you are interested in a teaser, please read our blog post or directly jump to our checklist.
Transformers have gained increasing popularity in different domains. For a comprehensive list of papers focusing on Neural Architecture Search for Transformer-based search spaces, the awesome-transformer-search repo is all you need.
5555
Zhu, Huijuan; Xia, Mengzhen; Wang, Liangmin; Xu, Zhicheng; Sheng, Victor S.
A Novel Knowledge Search Structure for Android Malware Detection Journal Article
In: IEEE Transactions on Services Computing, no. 01, pp. 1-14, 5555, ISSN: 1939-1374.
@article{10750332,
title = {A Novel Knowledge Search Structure for Android Malware Detection},
author = {Huijuan Zhu and Mengzhen Xia and Liangmin Wang and Zhicheng Xu and Victor S. Sheng},
url = {https://doi.ieeecomputersociety.org/10.1109/TSC.2024.3496333},
doi = {10.1109/TSC.2024.3496333},
issn = {1939-1374},
year = {5555},
date = {5555-11-01},
urldate = {5555-11-01},
journal = {IEEE Transactions on Services Computing},
number = {01},
pages = {1-14},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {While the Android platform is gaining explosive popularity, the number of malicious software (malware) is also increasing sharply. Thus, numerous malware detection schemes based on deep learning have been proposed. However, they usually suffer from cumbersome models with complex architectures and tremendous parameters. They usually require heavy computation power support, which seriously limits their deployment in actual application environments with limited resources (e.g., mobile edge devices). To surmount this challenge, we propose a novel Knowledge Distillation (KD) structure—Knowledge Search (KS). KS exploits Neural Architecture Search (NAS) to adaptively bridge the capability gap between teacher and student networks in KD by introducing a parallelized student-wise search approach. In addition, we carefully analyze the characteristics of malware and locate three cost-effective types of features closely related to malicious attacks, namely, Application Programming Interfaces (APIs), permissions and vulnerable components, to characterize Android Applications (Apps). Therefore, based on typical samples collected in recent years, we refine features while exploiting the natural relationship between them, and construct corresponding datasets. Massive experiments are conducted to investigate the effectiveness and sustainability of KS on these datasets. Our experimental results show that the proposed method yields an accuracy of 97.89% to detect Android malware, which performs better than state-of-the-art solutions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhang, Feifei; Li, Mao; Ge, Jidong; Tang, Fenghui; Zhang, Sheng; Wu, Jie; Luo, Bin
Privacy-Preserving Federated Neural Architecture Search With Enhanced Robustness for Edge Computing Journal Article
In: IEEE Transactions on Mobile Computing, no. 01, pp. 1-18, 5555, ISSN: 1558-0660.
@article{10742476,
title = {Privacy-Preserving Federated Neural Architecture Search With Enhanced Robustness for Edge Computing},
author = {Feifei Zhang and Mao Li and Jidong Ge and Fenghui Tang and Sheng Zhang and Jie Wu and Bin Luo},
url = {https://doi.ieeecomputersociety.org/10.1109/TMC.2024.3490835},
doi = {10.1109/TMC.2024.3490835},
issn = {1558-0660},
year = {5555},
date = {5555-11-01},
urldate = {5555-11-01},
journal = {IEEE Transactions on Mobile Computing},
number = {01},
pages = {1-18},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {With the development of large-scale artificial intelligence services, edge devices are becoming essential providers of data and computing power. However, these edge devices are not immune to malicious attacks. Federated learning (FL), while protecting privacy of decentralized data through secure aggregation, struggles to trace adversaries and lacks optimization for heterogeneity. We discover that FL augmented with Differentiable Architecture Search (DARTS) can improve resilience against backdoor attacks while remaining compatible with secure aggregation. Based on this, we propose a federated neural architecture search (NAS) framework named SLNAS. The architecture of SLNAS is built on three pivotal components: a server-side search space generation method that employs an evolutionary algorithm with dual encodings, a federated NAS process based on DARTS, and client-side architecture tuning that utilizes Gumbel softmax combined with knowledge distillation. To validate robustness, we adapt a framework that includes backdoor attacks based on trigger optimization, data poisoning, and model poisoning, targeting both model weights and architecture parameters. Extensive experiments demonstrate that SLNAS not only effectively counters advanced backdoor attacks but also handles heterogeneity, outperforming defense baselines across a wide range of backdoor attack scenarios.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhang, Yu-Ming; Hsieh, Jun-Wei; Lee, Chun-Chieh; Fan, Kuo-Chin
RATs-NAS: Redirection of Adjacent Trails on Graph Convolutional Networks for Predictor-based Neural Architecture Search Journal Article
In: IEEE Transactions on Artificial Intelligence, vol. 1, no. 01, pp. 1-11, 5555, ISSN: 2691-4581.
@article{10685480,
title = {RATs-NAS: Redirection of Adjacent Trails on Graph Convolutional Networks for Predictor-based Neural Architecture Search},
author = {Yu-Ming Zhang and Jun-Wei Hsieh and Chun-Chieh Lee and Kuo-Chin Fan},
url = {https://doi.ieeecomputersociety.org/10.1109/TAI.2024.3465433},
doi = {10.1109/TAI.2024.3465433},
issn = {2691-4581},
year = {5555},
date = {5555-09-01},
urldate = {5555-09-01},
journal = {IEEE Transactions on Artificial Intelligence},
volume = {1},
number = {01},
pages = {1-11},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {Manually designed CNN architectures like VGG, ResNet, DenseNet, and MobileNet have achieved high performance across various tasks, but designing them is time-consuming and costly. Neural Architecture Search (NAS) automates the discovery of effective CNN architectures, reducing the need for experts. However, evaluating candidate architectures requires significant GPU resources, leading to the use of predictor-based NAS, with graph convolutional networks (GCNs) being a popular option for constructing predictors. However, we discover that, even though GCNs mimic the propagation of features of real architectures, the binary nature of the adjacency matrix limits their effectiveness. To address this, we propose Redirection of Adjacent Trails (RATs), which adaptively learns trail weights within the adjacency matrix. Our RATs-GCN outperforms other predictors by dynamically adjusting trail weights after each graph convolution layer. Additionally, the proposed Divide Search Sampling (DSS) strategy, based on the observation in cell-based NAS that architectures with similar FLOPs perform similarly, enhances search efficiency. Our RATs-NAS, which combines RATs-GCN and DSS, shows significant improvements over other predictor-based NAS methods on NASBench-101, NASBench-201, and NASBench-301.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Chen, X.; Yang, C.
CIMNet: Joint Search for Neural Network and Computing-in-Memory Architecture Journal Article
In: IEEE Micro, no. 01, pp. 1-12, 5555, ISSN: 1937-4143.
@article{10551739,
title = {CIMNet: Joint Search for Neural Network and Computing-in-Memory Architecture},
author = {X. Chen and C. Yang},
url = {https://www.computer.org/csdl/magazine/mi/5555/01/10551739/1XyKBmSlmPm},
doi = {10.1109/MM.2024.3409068},
issn = {1937-4143},
year = {5555},
date = {5555-06-01},
urldate = {5555-06-01},
journal = {IEEE Micro},
number = {01},
pages = {1-12},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {Computing-in-memory (CIM) architecture has been proven to effectively transcend the memory wall bottleneck, expanding the potential of low-power and high-throughput applications such as machine learning. Neural architecture search (NAS) designs ML models to meet a variety of accuracy, latency, and energy constraints. However, integrating CIM into NAS presents a major challenge due to additional simulation overhead from the non-ideal characteristics of CIM hardware. This work introduces a quantization- and device-aware accuracy predictor that jointly scores quantization policy, CIM architecture, and neural network architecture, eliminating the need for time-consuming simulations in the search process. We also propose reducing the search space based on architectural observations, resulting in a well-pruned search space customized for CIM. These allow for efficient exploration of superior combinations in mere CPU minutes. Our methodology yields CIMNet, which consistently improves the trade-off between accuracy and hardware efficiency on benchmarks, providing valuable architectural insights.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yan, J.; Liu, J.; Xu, H.; Wang, Z.; Qiao, C.
Peaches: Personalized Federated Learning with Neural Architecture Search in Edge Computing Journal Article
In: IEEE Transactions on Mobile Computing, no. 01, pp. 1-17, 5555, ISSN: 1558-0660.
@article{10460163,
title = {Peaches: Personalized Federated Learning with Neural Architecture Search in Edge Computing},
author = {J. Yan and J. Liu and H. Xu and Z. Wang and C. Qiao},
doi = {10.1109/TMC.2024.3373506},
issn = {1558-0660},
year = {5555},
date = {5555-03-01},
urldate = {5555-03-01},
journal = {IEEE Transactions on Mobile Computing},
number = {01},
pages = {1-17},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {In edge computing (EC), federated learning (FL) enables numerous distributed devices (or workers) to collaboratively train AI models without exposing their local data. Most works of FL adopt a predefined architecture on all participating workers for model training. However, since workers' local data distributions vary heavily in EC, the predefined architecture may not be the optimal choice for every worker. It is also unrealistic to manually design a high-performance architecture for each worker, which requires intense human expertise and effort. In order to tackle this challenge, neural architecture search (NAS) has been applied in FL to automate the architecture design process. Unfortunately, the existing federated NAS frameworks often suffer from the difficulties of system heterogeneity and resource limitation. To remedy this problem, we present a novel framework, termed Peaches, to achieve efficient searching and training in the resource-constrained EC system. Specifically, the local model of each worker is stacked by base cell and personal cell, where the base cell is shared by all workers to capture the common knowledge and the personal cell is customized for each worker to fit the local data. We determine the number of base cells, shared by all workers, according to the bandwidth budget on the parameter server. Besides, to relieve the data and system heterogeneity, we find the optimal number of personal cells for each worker based on its computing capability. In addition, we gradually prune the search space during training to mitigate the resource consumption. We evaluate the performance of Peaches through extensive experiments, and the results show that Peaches can achieve an average accuracy improvement of about 6.29% and up to 3.97× speedup compared with the baselines.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2025
Zhong, Rui; Xu, Yuefeng; Zhang, Chao; Yu, Jun
Efficient multiplayer battle game optimizer for numerical optimization and adversarial robust neural architecture search Journal Article
In: Alexandria Engineering Journal, vol. 113, pp. 150-168, 2025, ISSN: 1110-0168.
@article{ZHONG2025150,
title = {Efficient multiplayer battle game optimizer for numerical optimization and adversarial robust neural architecture search},
author = {Rui Zhong and Yuefeng Xu and Chao Zhang and Jun Yu},
url = {https://www.sciencedirect.com/science/article/pii/S1110016824014935},
doi = {10.1016/j.aej.2024.11.035},
issn = {1110-0168},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Alexandria Engineering Journal},
volume = {113},
pages = {150-168},
abstract = {This paper introduces a novel metaheuristic algorithm, known as the efficient multiplayer battle game optimizer (EMBGO), specifically designed for addressing complex numerical optimization tasks. The motivation behind this research stems from the need to rectify identified shortcomings in the original MBGO, particularly in search operators during the movement phase, as revealed through ablation experiments. EMBGO mitigates these limitations by integrating the movement and battle phases to simplify the original optimization framework and improve search efficiency. Besides, two efficient search operators, differential mutation and Lévy flight, are introduced to increase the diversity of the population. To evaluate the performance of EMBGO comprehensively and fairly, numerical experiments are conducted on benchmark functions such as CEC2017, CEC2020, and CEC2022, as well as engineering problems. Twelve well-established MA approaches serve as competitor algorithms for comparison. Furthermore, we apply the proposed EMBGO to the complex adversarial robust neural architecture search (ARNAS) tasks and explore its robustness and scalability. The experimental results and statistical analyses confirm the efficiency and effectiveness of EMBGO across various optimization tasks. As a potential optimization technique, EMBGO holds promise for diverse applications in real-world problems and deep learning scenarios. The source code of EMBGO is made available at https://github.com/RuiZhong961230/EMBGO.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Man, Wenxing; Xu, Liming; He, Chunlin
Evolutionary architecture search for generative adversarial networks using an aging mechanism-based strategy Journal Article
In: Neural Networks, vol. 181, pp. 106877, 2025, ISSN: 0893-6080.
@article{MAN2025106877,
title = {Evolutionary architecture search for generative adversarial networks using an aging mechanism-based strategy},
author = {Wenxing Man and Liming Xu and Chunlin He},
url = {https://www.sciencedirect.com/science/article/pii/S0893608024008050},
doi = {10.1016/j.neunet.2024.106877},
issn = {0893-6080},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Neural Networks},
volume = {181},
pages = {106877},
abstract = {Generative Adversarial Networks (GANs) have emerged as a key technology in artificial intelligence, especially in image generation. However, traditionally hand-designed GAN architectures often face significant training stability challenges, which are effectively addressed by our Evolutionary Neural Architecture Search (ENAS) algorithm for GANs, named EAMGAN. This one-shot model automates the design of GAN architectures and employs an Operation Importance Metric (OIM) to enhance training stability. It also incorporates an aging mechanism to optimize the selection process during architecture search. Additionally, the use of a non-dominated sorting algorithm ensures the generation of Pareto-optimal solutions, promoting diversity and preventing premature convergence. We evaluated our method on benchmark datasets, and the results demonstrate that EAMGAN is highly competitive in terms of efficiency and performance. Our method identified an architecture achieving Inception Scores (IS) of 8.83±0.13 and Fréchet Inception Distance (FID) of 9.55 on CIFAR-10 with only 0.66 GPU days. Results on the STL-10, CIFAR-100, and ImageNet32 datasets further demonstrate the robust portability of our architecture.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Liu, Wenbo; Deng, Tao; An, Rui; Yan, Fei
DARTS-CGW: Research on Differentiable Neural Architecture Search Algorithm Based on Coarse Gradient Weighting Proceedings Article
In: Lin, Zhouchen; Cheng, Ming-Ming; He, Ran; Ubul, Kurban; Silamu, Wushouer; Zha, Hongbin; Zhou, Jie; Liu, Cheng-Lin (Ed.): Pattern Recognition and Computer Vision, pp. 31–44, Springer Nature Singapore, Singapore, 2025, ISBN: 978-981-97-8502-5.
@inproceedings{10.1007/978-981-97-8502-5_3,
title = {DARTS-CGW: Research on Differentiable Neural Architecture Search Algorithm Based on Coarse Gradient Weighting},
author = {Wenbo Liu and Tao Deng and Rui An and Fei Yan},
editor = {Zhouchen Lin and Ming-Ming Cheng and Ran He and Kurban Ubul and Wushouer Silamu and Hongbin Zha and Jie Zhou and Cheng-Lin Liu},
url = {https://link.springer.com/chapter/10.1007/978-981-97-8502-5_3},
isbn = {978-981-97-8502-5},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Pattern Recognition and Computer Vision},
pages = {31–44},
publisher = {Springer Nature Singapore},
address = {Singapore},
abstract = {Differential architecture search (DARTS) has emerged as a prominent research area, yet it grapples with a longstanding challenge: the discretization discrepancy problem. This issue directly impedes the search for an optimal model architecture and undermines search algorithm performance. To alleviate this issue, we propose a novel coarse gradient weighting algorithm. Our proposed algorithm has the capability to simulate the discretization process, wherein the architectural parameters move toward both ends. And we integrate this discretization process into the training phase of the architectural parameters, enabling the model to adapt to the discretization process in a trial-and-error fashion. Specifically, based on the architectural parameters in training, we divide the candidate operations into two regions, i.e., the easy-to-select region and the hard-to-be-selected region. The different weighting strategies are implemented in different regions so that the architectural parameters are pushed to the ends. The processed architecture parameters are used for training, which is equivalent to introducing the discretization process into the search phase. Additionally, we use the coarse gradient algorithm to optimize the updating process of the weighting algorithm and theoretically justify the rationality of the coarse gradient weighting algorithm. Extensive experimental results demonstrate that our proposed method can improve the performance of the searched model and make DARTS more robust without adding additional search time.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Garcia-Garcia, Cosijopii; Derbel, Bilel; Morales-Reyes, Alicia; Escalante, Hugo Jair
Speeding up the Multi-objective NAS Through Incremental Learning Proceedings Article
In: Martínez-Villaseñor, Lourdes; Ochoa-Ruiz, Gilberto (Ed.): Advances in Soft Computing, pp. 3–15, Springer Nature Switzerland, Cham, 2025, ISBN: 978-3-031-75543-9.
@inproceedings{10.1007/978-3-031-75543-9_1,
title = {Speeding up the Multi-objective NAS Through Incremental Learning},
author = {Cosijopii Garcia-Garcia and Bilel Derbel and Alicia Morales-Reyes and Hugo Jair Escalante},
editor = {Lourdes Martínez-Villaseñor and Gilberto Ochoa-Ruiz},
url = {https://link.springer.com/chapter/10.1007/978-3-031-75543-9_1},
isbn = {978-3-031-75543-9},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Advances in Soft Computing},
pages = {3–15},
publisher = {Springer Nature Switzerland},
address = {Cham},
abstract = {Deep neural networks (DNNs), particularly convolutional neural networks (CNNs), have garnered significant attention in recent years for addressing a wide range of challenges in image processing and computer vision. Neural architecture search (NAS) has emerged as a crucial field aiming to automate the design and configuration of CNN models. In this paper, we propose a novel strategy to speed up the performance estimation of neural architectures by gradually increasing the size of the training set used for evaluation as the search progresses. We evaluate this approach using the CGP-NASV2 model, a multi-objective NAS method, on the CIFAR-100 dataset. Experimental results demonstrate a notable acceleration in the search process, achieving a speedup of 4.6 times compared to the baseline. Despite using limited data in the early stages, our proposed method effectively guides the search towards competitive architectures. This study highlights the efficacy of leveraging lower-fidelity estimates in NAS and paves the way for further research into accelerating the design of efficient CNN architectures.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Solis-Martin, David; Galan-Paez, Juan; Borrego-Diaz, Joaquin
Bayesian Model Selection Pruning in Predictive Maintenance Proceedings Article
In: Quintián, Héctor; Corchado, Emilio; Lora, Alicia Troncoso; García, Hilde Pérez; Pérez, Esteban Jove; Rolle, José Luis Calvo; de Pisón, Francisco Javier Martínez; Bringas, Pablo García; Álvarez, Francisco Martínez; Herrero, Álvaro; Fosci, Paolo (Ed.): Hybrid Artificial Intelligent Systems, pp. 263–274, Springer Nature Switzerland, Cham, 2025, ISBN: 978-3-031-74183-8.
@inproceedings{10.1007/978-3-031-74183-8_22b,
title = {Bayesian Model Selection Pruning in Predictive Maintenance},
author = {David Solis-Martin and Juan Galan-Paez and Joaquin Borrego-Diaz},
editor = {Héctor Quintián and Emilio Corchado and Alicia Troncoso Lora and Hilde Pérez García and Esteban Jove Pérez and José Luis Calvo Rolle and Francisco Javier Martínez de Pisón and Pablo García Bringas and Francisco Martínez Álvarez and Álvaro Herrero and Paolo Fosci},
url = {https://link.springer.com/chapter/10.1007/978-3-031-74183-8_22},
isbn = {978-3-031-74183-8},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Hybrid Artificial Intelligent Systems},
pages = {263–274},
publisher = {Springer Nature Switzerland},
address = {Cham},
abstract = {Deep Neural Network architecture design significantly impacts the final model performance. The process of searching for optimal architectures, known as Neural Architecture Search (NAS), involves training and evaluating an important number of models. Therefore, mechanisms to reduce the resources required for NAS are highly valuable.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Wang, Weibo; Li, Hua
NAS FD Lung: A novel lung assist diagnostic system based on neural architecture search Journal Article
In: Biomedical Signal Processing and Control, vol. 100, pp. 107022, 2025, ISSN: 1746-8094.
@article{WANG2025107022,
title = {NAS FD Lung: A novel lung assist diagnostic system based on neural architecture search},
author = {Weibo Wang and Hua Li},
url = {https://www.sciencedirect.com/science/article/pii/S1746809424010802},
doi = {10.1016/j.bspc.2024.107022},
issn = {1746-8094},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Biomedical Signal Processing and Control},
volume = {100},
pages = {107022},
abstract = {In the detection and recognition of lung nodules, pulmonary nodules vary in size and shape and contain many similar tissues and organs around them, leading to the problems of both missed detection and false detection in existing detection algorithms. Designing proprietary detection and recognition networks manually requires substantial professional expertise. This process is time-consuming and labour-intensive and leads to issues like parameter redundancy and improper feature selection. Therefore, this paper proposes a new pulmonary CAD (computer-aided diagnosis) system for pulmonary nodules, NAS FD Lung (using the NAS approach to search deep FPN and DPN networks), that can automatically learn and generate a deep learning network tailored to pulmonary nodule detection and recognition task requirements. NAS FD Lung aims to use automatic search to generate deep learning networks in the auxiliary diagnosis of pulmonary nodules to replace the manual design of deep learning networks. NAS FD Lung comprises two automatic search networks: the BM NAS-FPN network (using NAS methods to search for deep FPN structures with binary operation and matrix multiplication fusion methods) for nodule detection and NAS-A-DPN (using the NAS approach to search deep DPN networks with an attention mechanism) for nodule identification. The proposed technique is tested on the LUNA16 dataset, and the experimental results show that the model is superior to many existing state-of-the-art approaches. The detection accuracy of lung nodules is 98.23%. Regarding the lung nodules classification, the accuracy, specificity, sensitivity, and AUC values achieved 96.32%, 97.14%, 95.82%, and 98.33%, respectively.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Wu, Zhenpeng; Chen, Jiamin; Al-Sabri, Raeed; Oloulade, Babatounde Moctard; Gao, Jianliang
Asymmetric augmented paradigm-based graph neural architecture search Journal Article
In: Information Processing & Management, vol. 62, no. 1, pp. 103897, 2025, ISSN: 0306-4573.
@article{WU2025103897,
title = {Asymmetric augmented paradigm-based graph neural architecture search},
author = {Zhenpeng Wu and Jiamin Chen and Raeed Al-Sabri and Babatounde Moctard Oloulade and Jianliang Gao},
url = {https://www.sciencedirect.com/science/article/pii/S0306457324002565},
doi = {10.1016/j.ipm.2024.103897},
issn = {0306-4573},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Information Processing & Management},
volume = {62},
number = {1},
pages = {103897},
abstract = {In most scenarios of graph-based tasks, graph neural networks (GNNs) are trained end-to-end with labeled samples. Labeling graph samples, a time-consuming and expert-dependent process, leads to huge costs. Graph data augmentations can provide a promising method to expand labeled samples cheaply. However, graph data augmentations will damage the capacity of GNNs to distinguish non-isomorphic graphs during the supervised graph representation learning process. How to utilize graph data augmentations to expand labeled samples while preserving the capacity of GNNs to distinguish non-isomorphic graphs is a challenging research problem. To address the above problem, we abstract a novel asymmetric augmented paradigm in this paper and theoretically prove that it offers a principled approach. The asymmetric augmented paradigm can preserve the capacity of GNNs to distinguish non-isomorphic graphs while utilizing augmented labeled samples to improve the generalization capacity of GNNs. To be specific, the asymmetric augmented paradigm will utilize similar yet distinct asymmetric weights to classify the real sample and augmented sample, respectively. To systemically explore the benefits of asymmetric augmented paradigm under different GNN architectures, rather than studying individual asymmetric augmented GNN (A2GNN) instance, we then develop an auto-search engine called Asymmetric Augmented Graph Neural Architecture Search (A2GNAS) to save human efforts. We empirically validate our asymmetric augmented paradigm on multiple graph classification benchmarks, and demonstrate that representative A2GNN instances automatically discovered by our A2GNAS method achieve state-of-the-art performance compared with competitive baselines. Our codes are available at: https://github.com/csubigdata-Organization/A2GNAS.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Jiang, Zhiying; Liu, Risheng; Yang, Shuzhou; Zhang, Zengxi; Fan, Xin
DRNet: Learning a dynamic recursion network for chaotic rain streak removal Journal Article
In: Pattern Recognition, vol. 158, pp. 111004, 2025, ISSN: 0031-3203.
@article{JIANG2025111004,
title = {DRNet: Learning a dynamic recursion network for chaotic rain streak removal},
author = {Zhiying Jiang and Risheng Liu and Shuzhou Yang and Zengxi Zhang and Xin Fan},
url = {https://www.sciencedirect.com/science/article/pii/S0031320324007556},
doi = {10.1016/j.patcog.2024.111004},
issn = {0031-3203},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Pattern Recognition},
volume = {158},
pages = {111004},
abstract = {Image deraining refers to removing the visible rain streaks to restore the rain-free scenes. Existing methods rely on manually crafted networks to model the distribution of rain streaks. However, complex scenes disrupt the uniformity of rain streak characteristics assumed in ideal conditions, resulting in rain streaks of varying directions, intensities, and brightness intersecting within the same scene, challenging the deep learning based deraining performance. To address the chaotic rain streak removal, we handle the rain streaks with similar distribution characteristics in the same layer and employ a dynamic recursive mechanism to extract and unveil them progressively. Specifically, we employ neural architecture search to determine the models of different rain streaks. To avoid the loss of texture details associated with overly deep structures, we integrate multi-scale modeling and cross-scale recruitment within the dynamic structure. Considering the application of real-world scenes, we incorporate contrastive training to improve the generalization. Experimental results indicate superior performance in rain streak depiction compared to existing methods. Practical evaluation confirms its effectiveness in object detection and semantic segmentation tasks. Code is available at https://github.com/Jzy2017/DRNet.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Rahman, Abdur; Street, Jason; Wooten, James; Marufuzzaman, Mohammad; Gude, Veera G.; Buchanan, Randy; Wang, Haifeng
MoistNet: Machine vision-based deep learning models for wood chip moisture content measurement Journal Article
In: Expert Systems with Applications, vol. 259, pp. 125363, 2025, ISSN: 0957-4174.
@article{Rahman_2025,
title = {MoistNet: Machine vision-based deep learning models for wood chip moisture content measurement},
author = {Abdur Rahman and Jason Street and James Wooten and Mohammad Marufuzzaman and Veera G. Gude and Randy Buchanan and Haifeng Wang},
url = {http://dx.doi.org/10.1016/j.eswa.2024.125363},
doi = {10.1016/j.eswa.2024.125363},
issn = {0957-4174},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Expert Systems with Applications},
volume = {259},
pages = {125363},
publisher = {Elsevier BV},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Öcal, Göktuğ; Özgövde, Atay
Network-aware federated neural architecture search Journal Article
In: Future Generation Computer Systems, vol. 162, pp. 107475, 2025, ISSN: 0167-739X.
@article{OCAL2025107475,
title = {Network-aware federated neural architecture search},
author = {Göktuğ Öcal and Atay Özgövde},
url = {https://www.sciencedirect.com/science/article/pii/S0167739X24004205},
doi = {10.1016/j.future.2024.07.053},
issn = {0167-739X},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Future Generation Computer Systems},
volume = {162},
pages = {107475},
abstract = {The cooperation between Deep Learning (DL) and edge devices has further advanced technological developments, allowing smart devices to serve as both data sources and endpoints for DL-powered applications. However, the success of DL relies on optimal Deep Neural Network (DNN) architectures, and manually developing such systems requires extensive expertise and time. Neural Architecture Search (NAS) has emerged to automate the search for the best-performing neural architectures. Meanwhile, Federated Learning (FL) addresses data privacy concerns by enabling collaborative model development without exchanging the private data of clients. In a FL system, network limitations can lead to biased model training, slower convergence, and increased communication overhead. On the other hand, traditional DNN architecture design, emphasizing validation accuracy, often overlooks computational efficiency and size constraints of edge devices. This research aims to develop a comprehensive framework that effectively balances trade-offs between model performance, communication efficiency, and the incorporation of FL into an iterative NAS algorithm. This framework aims to overcome challenges by addressing the specific requirements of FL, optimizing DNNs through NAS, and ensuring computational efficiency while considering the network constraints of edge devices. To address these challenges, we introduce Network-Aware Federated Neural Architecture Search (NAFNAS), an open-source federated neural network pruning framework with network emulation support. Through comprehensive testing, we demonstrate the feasibility of our approach, efficiently reducing DNN size and mitigating communication challenges. Additionally, we propose Network and Distribution Aware Client Grouping (NetDAG), a novel client grouping algorithm tailored for FL with diverse DNN architectures, considerably enhancing efficiency of communication rounds and update balance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2024
Zhou, Ao; Yang, Jianlei; Qi, Yingjie; Qiao, Tong; Shi, Yumeng; Duan, Cenlin; Zhao, Weisheng; Hu, Chunming
HGNAS: Hardware-Aware Graph Neural Architecture Search for Edge Devices Journal Article
In: IEEE Transactions on Computers, vol. 73, no. 12, pp. 2693-2707, 2024, ISSN: 1557-9956.
@article{10644077,
title = { HGNAS: Hardware-Aware Graph Neural Architecture Search for Edge Devices },
author = {Ao Zhou and Jianlei Yang and Yingjie Qi and Tong Qiao and Yumeng Shi and Cenlin Duan and Weisheng Zhao and Chunming Hu},
url = {https://doi.ieeecomputersociety.org/10.1109/TC.2024.3449108},
doi = {10.1109/TC.2024.3449108},
issn = {1557-9956},
year = {2024},
date = {2024-12-01},
urldate = {2024-12-01},
journal = {IEEE Transactions on Computers},
volume = {73},
number = {12},
pages = {2693-2707},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {Graph Neural Networks (GNNs) are becoming increasingly popular for graph-based learning tasks such as point cloud processing due to their state-of-the-art (SOTA) performance. Nevertheless, the research community has primarily focused on improving model expressiveness, lacking consideration of how to design efficient GNN models for edge scenarios with real-time requirements and limited resources. Examining existing GNN models reveals varied execution across platforms and frequent Out-Of-Memory (OOM) problems, highlighting the need for hardware-aware GNN design. To address this challenge, this work proposes a novel hardware-aware graph neural architecture search framework tailored for resource constraint edge devices, namely HGNAS. To achieve hardware awareness, HGNAS integrates an efficient GNN hardware performance predictor that evaluates the latency and peak memory usage of GNNs in milliseconds. Meanwhile, we study GNN memory usage during inference and offer a peak memory estimation method, enhancing the robustness of architecture evaluations when combined with predictor outcomes. Furthermore, HGNAS constructs a fine-grained design space to enable the exploration of extreme performance architectures by decoupling the GNN paradigm. In addition, the multi-stage hierarchical search strategy is leveraged to facilitate the navigation of huge candidates, which can reduce the single search time to a few GPU hours. To the best of our knowledge, HGNAS is the first automated GNN design framework for edge devices, and also the first work to achieve hardware awareness of GNNs across different platforms. Extensive experiments across various applications and edge devices have proven the superiority of HGNAS. It can achieve up to a 10.6× speedup and an 82.5% peak memory reduction with negligible accuracy loss compared to DGCNN on ModelNet40.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Li, Gai; Cao, Chunhong; Fu, Huawei; Li, Xingxing; Gao, Xieping
Modeling Functional Brain Networks for ADHD via Spatial Preservation-Based Neural Architecture Search Journal Article
In: IEEE Journal of Biomedical and Health Informatics, 2024.
@article{Li-BHI24a,
title = {Modeling Functional Brain Networks for ADHD via Spatial Preservation-Based Neural Architecture Search},
author = {Gai Li and Chunhong Cao and Huawei Fu and Xingxing Li and Xieping Gao},
url = {https://pubmed.ncbi.nlm.nih.gov/39167518/},
year = {2024},
date = {2024-11-28},
urldate = {2024-11-28},
journal = {IEEE Journal of Biomedical and Health Informatics},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Sun, Hongmin; Kan, Ao; Liu, Jianhao; Du, Wei
HG-search: multi-stage search for heterogeneous graph neural networks Journal Article
In: Applied Intelligence, 2024.
@article{sun-applieint-24a,
title = {HG-search: multi-stage search for heterogeneous graph neural networks},
author = {Hongmin Sun and Ao Kan and Jianhao Liu and Wei Du},
url = {https://link.springer.com/article/10.1007/s10489-024-06058-w},
year = {2024},
date = {2024-11-19},
urldate = {2024-11-19},
journal = {Applied Intelligence},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Ma, Jia; Ma, Xinru; Li, Chulian; Li, Tongyan
Vehicle-drone collaborative distribution path planning based on neural architecture search under the influence of carbon emissions Journal Article
In: Discover Computing, vol. 27, 2024.
@article{ma-dc24a,
title = {Vehicle-drone collaborative distribution path planning based on neural architecture search under the influence of carbon emissions},
author = {Jia Ma and Xinru Ma and Chulian Li and Tongyan Li},
url = {https://link.springer.com/article/10.1007/s10791-024-09469-y},
year = {2024},
date = {2024-11-11},
urldate = {2024-11-11},
journal = {Discover Computing},
volume = {27},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Patil, Tejwardhan
Quantum-Enhanced Neural Architecture Search (Q-NAS) Technical Report
2024.
@techreport{nokey,
title = {Quantum-Enhanced Neural Architecture Search (Q-NAS)},
author = {Tejwardhan Patil},
url = {https://openreview.net/pdf/eacda89e6b55648ad4512decd7c711e3be063033.pdf},
year = {2024},
date = {2024-11-01},
urldate = {2024-11-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Li, Jialin; Cao, Xuan; Chen, Renxiang; Zhao, Chengying; Huang, Xianzhen
Network architecture search methods for constructing efficient fault diagnosis models in rotating machinery Journal Article
In: Measurement Science and Technology, vol. 36, no. 1, pp. 016144, 2024.
@article{Li_2025,
title = {Network architecture search methods for constructing efficient fault diagnosis models in rotating machinery},
author = {Jialin Li and Xuan Cao and Renxiang Chen and Chengying Zhao and Xianzhen Huang},
url = {https://dx.doi.org/10.1088/1361-6501/ad8f4c},
doi = {10.1088/1361-6501/ad8f4c},
year = {2024},
date = {2024-11-01},
urldate = {2024-11-01},
journal = {Measurement Science and Technology},
volume = {36},
number = {1},
pages = {016144},
publisher = {IOP Publishing},
abstract = {The development of high-performance fault diagnosis models for specific tasks requires substantial expertise. Neural architecture search (NAS) offers a promising solution, but most NAS methodologies are hampered by lengthy search durations and low efficiency, and few researchers have applied these methods within the fault diagnosis domain. This paper introduces a novel differentiable architecture search method tailored for constructing efficient fault diagnosis models for rotating machinery, designed to rapidly and effectively search for network models suitable for specific datasets. Specifically, this study constructs a completely new and advanced search space, incorporating various efficient, lightweight convolutional operations to reduce computational complexity. To enhance the stability of the differentiable network architecture search process and reduce fluctuations in model accuracy, this study proposes a novel Multi-scale Pyramid Squeeze Attention module. This module aids in the learning of richer multi-scale feature representations and adaptively recalibrates the weights of multi-dimensional channel attention. The proposed method was validated on two rotating machinery fault datasets, demonstrating superior performance compared to manually designed networks and general network search methods, with notably improved diagnostic effectiveness.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Gao, Jianhua; Liu, Zeming; Wang, Yizhuo; Ji, Weixing
RaNAS: Resource-Aware Neural Architecture Search for Edge Computing Journal Article
In: ACM Trans. Archit. Code Optim., 2024, ISSN: 1544-3566, (Just Accepted).
@article{10.1145/3703353,
title = {RaNAS: Resource-Aware Neural Architecture Search for Edge Computing},
author = {Jianhua Gao and Zeming Liu and Yizhuo Wang and Weixing Ji},
url = {https://doi.org/10.1145/3703353},
doi = {10.1145/3703353},
issn = {1544-3566},
year = {2024},
date = {2024-11-01},
urldate = {2024-11-01},
journal = {ACM Trans. Archit. Code Optim.},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
abstract = {Neural architecture search (NAS) for edge devices is often time-consuming because of long-latency deploying and testing on edge devices. The ability to accurately predict the computation cost and memory requirement for convolutional neural networks (CNNs) in advance holds substantial value. Existing work primarily relies on analytical models, which can result in high prediction errors. This paper proposes a resource-aware NAS (RaNAS) model based on various features. Additionally, a new graph neural network is introduced to predict inference latency and maximum memory requirements for CNNs on edge devices. Experimental results show that, within the error bound of ±1%, RaNAS achieves an accuracy improvement of approximately 8% for inference latency prediction and about 25% for maximum memory occupancy prediction over the state-of-the-art approaches.},
note = {Just Accepted},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Lovison-Franco, Bruno; Miquel, Jonathan; Romdhane, Aymen; Prenat, Guillaume; Anghel, Lorena; Novo, David; Benoit, Pascal
Trade-offs in Neural Network Compression: Quantized and Binary Models for Keyword Spotting Proceedings Article
In: ICECS 2024 - 31st IEEE International Conference on Electronics Circuits and Systems, pp. In press, Nancy, France, 2024.
@inproceedings{lovisonfranco:lirmm-04717703,
title = {Trade-offs in Neural Network Compression: Quantized and Binary Models for Keyword Spotting},
author = {Bruno Lovison-Franco and Jonathan Miquel and Aymen Romdhane and Guillaume Prenat and Lorena Anghel and David Novo and Pascal Benoit},
url = {https://hal-lirmm.ccsd.cnrs.fr/lirmm-04717703},
year = {2024},
date = {2024-11-01},
urldate = {2024-11-01},
booktitle = {ICECS 2024 - 31st IEEE International Conference on Electronics Circuits and Systems},
pages = {In press},
address = {Nancy, France},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Kolodochka, Dmytro; Polyakova, Marina; Nesteriuk, Oleksandr; Makarichev, Victor
LaMa network architecture search for image inpainting Journal Article
In: ICST-2024: Information Control Systems & Technologies, 2024.
@article{Kolodochka,
title = {LaMa network architecture search for image inpainting},
author = {Dmytro Kolodochka and Marina Polyakova and Oleksandr Nesteriuk and Victor Makarichev},
url = {https://ceur-ws.org/Vol-3790/paper32.pdf},
year = {2024},
date = {2024-10-03},
urldate = {2024-10-03},
journal = {ICST-2024: Information Control Systems & Technologies},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
D'souza, Melwin; Gurpur, Ananth Prabhu; Kumara, Varuna
SANAS-Net: spatial attention neural architecture search for breast cancer detection Journal Article
In: IAES International Journal of Artificial Intelligence (IJ-AI), vol. 13, no. 3, 2024, ISSN: 2252-8938.
@article{souza-24a,
title = {SANAS-Net: spatial attention neural architecture search for breast cancer detection},
author = {Melwin D'souza and Ananth Prabhu Gurpur and Varuna Kumara},
url = {https://ijai.iaescore.com/index.php/IJAI/article/view/24632},
doi = {10.11591/ijai.v13.i3.pp3339-3349},
issn = {2252-8938},
year = {2024},
date = {2024-10-02},
urldate = {2024-10-02},
journal = {IAES International Journal of Artificial Intelligence (IJ-AI)},
volume = {13},
number = {3},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhang, Shutao; Chen, Qian
Deep Learning Approach for CSI Feedback in Massive MIMO Technical Report
2024.
@techreport{nokey,
title = {Deep Learning Approach for CSI Feedback in Massive MIMO},
author = {Shutao Zhang and Qian Chen},
url = {https://www.researchgate.net/profile/Shutao-Zhang-5/publication/385215004_Deep_Learning_Approach_for_CSI_Feedback_in_Massive_MIMO_A_Competition_Report/links/671afd72edbc012ea13d14ab/Deep-Learning-Approach-for-CSI-Feedback-in-Massive-MIMO-A-Competition-Report.pdf},
year = {2024},
date = {2024-10-02},
urldate = {2024-10-02},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Bargagna, Filippo; Zigrino, Donato; Santi, Lisa Anita De; Genovesi, Dario; Scipioni, Michele; Favilli, Brunella; Vergaro, Giuseppe; Emdin, Michele; Giorgetti, Assuero; Positano, Vincenzo; Santarelli, Maria Filomena
Automated Neural Architecture Search for Cardiac Amyloidosis Classification from [18F]-Florbetaben PET Images Journal Article
In: Journal of Imaging Informatics in Medicine, 2024.
@article{Bargagna-jiim24a,
title = {Automated Neural Architecture Search for Cardiac Amyloidosis Classification from [18F]-Florbetaben PET Images},
author = {Filippo Bargagna and Donato Zigrino and Lisa Anita De Santi and Dario Genovesi and Michele Scipioni and Brunella Favilli and Giuseppe Vergaro and Michele Emdin and Assuero Giorgetti and Vincenzo Positano and Maria Filomena Santarelli},
url = {https://link.springer.com/article/10.1007/s10278-024-01275-8},
year = {2024},
date = {2024-10-02},
urldate = {2024-10-02},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Patel, Kush
AutoML and Automated Data Science by Democratizing AI through End-to-End Automation Journal Article
In: International Journal for Research in Applied Science and Engineering Technology (IJRASET), 2024.
@article{nokey,
title = {AutoML and Automated Data Science by Democratizing AI through End-to-End Automation},
author = {Kush Patel },
url = {https://www.ijraset.com/research-paper/automl-and-automated-data-science-by-democratizing-ai-through-end-to-end-automation},
year = {2024},
date = {2024-10-01},
urldate = {2024-10-01},
journal = {International Journal for Research in Applied Science and Engineering Technology (IJRASET)},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Kadway, Chetan; Mukhopadhyay, Shalini; Islam, Syed Mujibul; Choudhury, Abhishek Roy; Dey, Swarnava; Dey, Sounak; Mukherjee, Arijit; Pal, Arpan
Zero-Shot Embedded Neural Architecture Search for On-board Satellite Tasks & Hardware Accelerators Proceedings Article
In: Dold, Dominik; Hadjiivanov, Alexander; Izzo, Dario (Ed.): Proceedings of SPAICE2024: The First Joint European Space Agency / IAA Conference on AI in and for Space, pp. 175-179, 2024.
@inproceedings{2024sais.conf..175K,
title = {Zero-Shot Embedded Neural Architecture Search for On-board Satellite Tasks & Hardware Accelerators},
author = {Chetan Kadway and Shalini Mukhopadhyay and Syed Mujibul Islam and Abhishek Roy Choudhury and Swarnava Dey and Sounak Dey and Arijit Mukherjee and Arpan Pal},
editor = {Dominik Dold and Alexander Hadjiivanov and Dario Izzo},
url = {https://zenodo.org/records/13885553},
doi = {10.5281/zenodo.13885553},
year = {2024},
date = {2024-10-01},
urldate = {2024-10-01},
booktitle = {Proceedings of SPAICE2024: The First Joint European Space Agency / IAA Conference on AI in and for Space},
pages = {175-179},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Lou, Xiaoxuan
Towards security analysis and design of confidential computing systems PhD Thesis
2024.
@phdthesis{lou-phd24a,
title = {Towards security analysis and design of confidential computing systems},
author = {Lou, Xiaoxuan},
url = {https://dr.ntu.edu.sg/handle/10356/180639},
year = {2024},
date = {2024-10-01},
urldate = {2024-10-01},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Huang, Lin; Qin, Xi; Yang, Tiejun
MFC-NAS: Multifunctional Cells Based Neural Architecture Search for Plant Images Classification Miscellaneous
2024.
@misc{huang-,
title = {MFC-NAS: Multifunctional Cells Based Neural Architecture Search for Plant Images Classification},
author = {Lin Huang and Xi Qin and Tiejun Yang},
url = {https://www.researchsquare.com/article/rs-4889773/v1},
year = {2024},
date = {2024-10-01},
urldate = {2024-10-01},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}
Cari, Kamal Ud-Din; Mercury, Marin; Alaba, Guiomar
A Snapshot of Tiny AI: Innovations in Model Compression and Deployment Journal Article
In: Innovations in Model Compression and Deployment, 2024.
@article{Cari-24a,
title = {A Snapshot of Tiny AI: Innovations in Model Compression and Deployment},
author = { Kamal Ud-Din Cari and Marin Mercury and Guiomar Alaba},
url = {https://essopenarchive.org/doi/full/10.22541/au.172894062.27932664},
year = {2024},
date = {2024-10-01},
urldate = {2024-10-01},
journal = {Innovations in Model Compression and Deployment},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Jiao, Qing; Hu, Weifei; Hao, Guangbo; Cheng, Jin; Peng, Xiang; Liu, Zhenyu; Tan, Jianrong
A digital twin of intelligent robotic grasping based on single-loop-optimized differentiable architecture search and sim-real collaborative learning Journal Article
In: Journal of Intelligent Manufacturing, 2024.
@article{jiao-jim24a,
title = {A digital twin of intelligent robotic grasping based on single-loop-optimized differentiable architecture search and sim-real collaborative learning},
author = {Qing Jiao and Weifei Hu and Guangbo Hao and Jin Cheng and Xiang Peng and Zhenyu Liu and Jianrong Tan},
url = {https://link.springer.com/article/10.1007/s10845-024-02498-w},
year = {2024},
date = {2024-10-01},
urldate = {2024-10-01},
journal = {Journal of Intelligent Manufacturing},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Awarayi, Nicodemus Songose; Twum, Frimpong; Owusu-Agyemang, Kwabena
A Neural Architecture Search CNN for Alzheimer’s Disease Classification Journal Article
In: ECTI Transactions on Computer and Information Technology, 2024.
@article{nokey,
title = {A Neural Architecture Search CNN for Alzheimer’s Disease Classification},
author = {Nicodemus Songose Awarayi and Frimpong Twum and Kwabena Owusu-Agyemang},
url = {https://ph01.tci-thaijo.org/index.php/ecticit/article/view/255728},
year = {2024},
date = {2024-10-01},
urldate = {2024-10-01},
journal = {ECTI Transactions on Computer and Information Technology},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Loroch, Dominik Marek
Hardware-Aware Neural Architecture Search Book Section
In: 2024.
@incollection{Loroch-b24a,
title = {Hardware-Aware Neural Architecture Search},
author = {Dominik Marek Loroch},
url = {https://link.springer.com/content/pdf/10.1007/978-3-031-66253-9.pdf#page=323},
doi = {10.1007/978-3-031-66253-9},
year = {2024},
date = {2024-10-01},
keywords = {},
pubstate = {published},
tppubtype = {incollection}
}
Li, Guihong
Theoretically-grounded efficient deep learning system design PhD Thesis
2024.
@phdthesis{li-phd24a,
title = {Theoretically-grounded efficient deep learning system design},
author = {Li, Guihong},
url = {https://repositories.lib.utexas.edu/items/858373c2-a995-44af-9054-3139159616d0},
year = {2024},
date = {2024-10-01},
urldate = {2024-10-01},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Slimani, Hicham; Mhamdi, Jamal El; Jilbab, Abdelilah
Open-access Deep Learning Structure for Real-time Crop Monitoring Based on Neural Architecture Search and UAV Journal Article
In: Braz. arch. biol. technol., 2024.
@article{nokey,
title = { Open-access Deep Learning Structure for Real-time Crop Monitoring Based on Neural Architecture Search and UAV },
author = {Hicham Slimani and Jamal El Mhamdi and Abdelilah Jilbab },
url = {https://doi.org/10.1590/1678-4324-2024231141},
year = {2024},
date = {2024-10-01},
urldate = {2024-10-01},
journal = {Braz. arch. biol. technol.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Han, Shihao; Liu, Sishuo; Du, Shucheng; Li, Mingzi; Ye, Zijian; Xu, Xiaoxin; Li, Yi; Wang, Zhongrui; Shang, Dashan
CMN: a co-designed neural architecture search for efficient computing-in-memory-based mixture-of-experts Journal Article
In: Science China Information Sciences, 2024.
@article{han-scis24a,
title = {CMN: a co-designed neural architecture search for efficient computing-in-memory-based mixture-of-experts},
author = {Shihao Han and Sishuo Liu and Shucheng Du and Mingzi Li and Zijian Ye and Xiaoxin Xu and Yi Li and Zhongrui Wang and Dashan Shang},
url = {https://link.springer.com/article/10.1007/s11432-024-4144-y},
year = {2024},
date = {2024-10-01},
urldate = {2024-10-01},
journal = {Science China Information Sciences},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Xiong, Suqin; Li, Yang; Wang, Jun; Zhang, Zhi; Wang, Hao; Lu, Lijun
Anti-islanding image detection and optimization of distributed power supply using neural network architecture search Journal Article
In: Discover Computing, 2024.
@article{xiong-dc24a,
title = {Anti-islanding image detection and optimization of distributed power supply using neural network architecture search},
author = {Suqin Xiong and Yang Li and Jun Wang and Zhi Zhang and Hao Wang and Lijun Lu},
url = {https://link.springer.com/article/10.1007/s10791-024-09468-z},
year = {2024},
date = {2024-09-27},
urldate = {2024-09-27},
journal = {Discover Computing},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Ma, QuanGong; Hao, ChaoLong; Yang, XuKui; Qian, LongLong; Zhang, Hao; Si, NianWen; Xu, MinChen; Qu, Dan
Continuous evolution for efficient quantum architecture search Journal Article
In: EPJ Quantum Technology, 2024.
@article{maeükqt24a,
title = {Continuous evolution for efficient quantum architecture search},
author = {QuanGong Ma and ChaoLong Hao and XuKui Yang and LongLong Qian and Hao Zhang and NianWen Si and MinChen Xu and Dan Qu},
url = {https://link.springer.com/article/10.1140/epjqt/s40507-024-00265-7},
year = {2024},
date = {2024-09-06},
urldate = {2024-09-06},
journal = {EPJ Quantum Technology},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhao, ZiHao; Tang, XiangHong; Lu, JianGuang; Huang, Yong
Lightweight graph neural network architecture search based on heuristic algorithms Journal Article
In: International Journal of Machine Learning and Cybernetics, 2024.
@article{zhao-ijmlc24a,
title = {Lightweight graph neural network architecture search based on heuristic algorithms},
author = {ZiHao Zhao and XiangHong Tang and JianGuang Lu and Yong Huang},
url = {https://link.springer.com/article/10.1007/s13042-024-02356-4},
year = {2024},
date = {2024-09-04},
urldate = {2024-09-04},
journal = {International Journal of Machine Learning and Cybernetics},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Ringhofer, Christopher; Gnoss, Alexa; Schiele, Gregor
Balancing Error and Latency of Black-Box Models for Audio Effects Using Hardware-Aware Neural Architecture Search Proceedings Article
In: Proceedings of the 27th International Conference on Digital Audio Effects (DAFx24), 2024.
@inproceedings{Ringhofer-daf24a,
title = {Balancing Error and Latency of Black-Box Models for Audio Effects Using Hardware-Aware Neural Architecture Search},
author = {Christopher Ringhofer and Alexa Gnoss and Gregor Schiele},
url = {https://www.dafx.de/paper-archive/2024/papers/DAFx24_paper_44.pdf},
year = {2024},
date = {2024-09-03},
urldate = {2024-09-03},
booktitle = {Proceedings of the 27th International Conference on Digital Audio Effects (DAFx24)},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Fu, Wei; Lou, Wenqi; Gong, Lei; Wang, Chao; Zhou, Xuehai
Beyond Training: A Zero-Shot Framework to Neural Architecture and Accelerator Co-Exploration Proceedings Article
In: 2024 IEEE International Conference on Cluster Computing Workshops (CLUSTER Workshops), pp. 148-149, IEEE Computer Society, Los Alamitos, CA, USA, 2024.
@inproceedings{10740908,
title = { Beyond Training: A Zero-Shot Framework to Neural Architecture and Accelerator Co-Exploration },
author = {Wei Fu and Wenqi Lou and Lei Gong and Chao Wang and Xuehai Zhou},
url = {https://doi.ieeecomputersociety.org/10.1109/CLUSTERWorkshops61563.2024.00032},
doi = {10.1109/CLUSTERWorkshops61563.2024.00032},
year = {2024},
date = {2024-09-01},
urldate = {2024-09-01},
booktitle = {2024 IEEE International Conference on Cluster Computing Workshops (CLUSTER Workshops)},
pages = {148-149},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {Recently, the co-exploration of neural architectures and accelerators has become imperative to achieve high accuracy and hardware performance. However, existing methods face a huge training burden during neural network search, which prolongs the search time and increases the task-switching deployment cost. Additionally, their limited hardware search space restricts the potential for optimal results. To address these issues, we propose an efficient zero-shot framework for quickly identifying optimized neural architectures and accelerators based on given requirements. First, we design a training-free proxy to rank the accuracy of the network efficiently, thereby eliminating the need for the network training process. Second, we conduct a finer-grained hardware search space to achieve optimal performance. Preliminary experiments demonstrate that, compared to prior methods, our approach enhances search efficiency by 9.67× while optimizing the hardware Energy-Delay Product (EDP) by up to 24.6×, without compromising network accuracy.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Krzywda, Maciej; Łukasik, Szymon; Gandomi, Amir H.
Cartesian Genetic Programming Approach for Designing Convolutional Neural Networks Technical Report
2024.
@techreport{2024arXiv241000129K,
title = {Cartesian Genetic Programming Approach for Designing Convolutional Neural Networks},
author = {Maciej Krzywda and Szymon Łukasik and Amir H. Gandomi},
doi = {10.48550/arXiv.2410.00129},
year = {2024},
date = {2024-09-01},
urldate = {2024-09-01},
journal = {arXiv e-prints},
pages = {arXiv:2410.00129},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Hao, Zixiang
Advancing Lung Cancer Diagnosis: Federated Learning-Based Privacy Innovations Miscellaneous
2024.
@misc{hao-24a,
title = {Advancing Lung Cancer Diagnosis: Federated Learning-Based Privacy Innovations},
author = {Zixiang Hao},
url = {https://www.scitepress.org/Papers/2024/129388/129388.pdf},
year = {2024},
date = {2024-09-01},
urldate = {2024-09-01},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}
Wang, Di
Federated Learning Using GPT-4 Boosted Particle Swarm Optimization for Compact Neural Architecture Search Journal Article
In: Journal of Advances in Information Technology, 2024.
@article{wang-jait24a,
title = {Federated Learning Using GPT-4 Boosted Particle Swarm Optimization for Compact Neural Architecture Search},
author = {Di Wang},
url = {https://www.jait.us/articles/2024/JAIT-V15N9-1011.pdf},
year = {2024},
date = {2024-09-01},
urldate = {2024-09-01},
journal = {Journal of Advances in Information Technology},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Park, Eunik; Ahn, Daehyun; Kim, Hyungjun
RepTor: Re-parameterizable Temporal Convolution for Keyword Spotting via Differentiable Kernel Search Proceedings Article
In: Interspeech 2024, 2024.
@inproceedings{park-interspeech24a,
title = {RepTor: Re-parameterizable Temporal Convolution for Keyword Spotting via Differentiable Kernel Search},
author = {Eunik Park and Daehyun Ahn and Hyungjun Kim},
url = {https://www.isca-archive.org/interspeech_2024/park24_interspeech.pdf},
year = {2024},
date = {2024-09-01},
urldate = {2024-09-01},
booktitle = {Interspeech 2024},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Dai, Menghang; Liu, Zhiliang; He, Zixiao
Wafer defect pattern recognition based on differentiable architecture search with dual attention module Journal Article
In: Measurement Science and Technology, vol. 35, no. 12, pp. 125102, 2024.
@article{Dai_2024,
title = {Wafer defect pattern recognition based on differentiable architecture search with dual attention module},
author = {Menghang Dai and Zhiliang Liu and Zixiao He},
url = {https://dx.doi.org/10.1088/1361-6501/ad730b},
doi = {10.1088/1361-6501/ad730b},
year = {2024},
date = {2024-09-01},
urldate = {2024-09-01},
journal = {Measurement Science and Technology},
volume = {35},
number = {12},
pages = {125102},
publisher = {IOP Publishing},
abstract = {Wafer defect pattern recognition is a crucial process for ensuring chip production quality. Due to the complexity of wafer production processes, wafers often contain multiple defect patterns simultaneously, making it challenging for existing deep learning algorithms designed for single defect patterns to achieve optimal performance. To address this issue, this paper proposes a dual attention integrated differentiable architecture search (DA-DARTS), which can automatically search for suitable neural network architectures, significantly simplifying the architecture design process. Furthermore, the integration of DA greatly enhances the efficiency of the architecture search. We validated our proposed method on the MixedWM38 dataset, and experimental results indicate that the DA-DARTS method achieves higher pattern recognition accuracy under mixed defect patterns compared to baseline methods, maintaining performance stability even on imbalanced datasets.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Ouertatani, Houssem; Maxim, Cristian; Niar, Smail; Talbi, El-Ghazali
Accelerated NAS via pretrained ensembles and multi-fidelity Bayesian Optimization Proceedings Article
In: 33rd International Conference on Artificial Neural Networks (ICANN), Lugano, Switzerland, 2024.
@inproceedings{ouertatani:hal-04611343,
title = {Accelerated NAS via pretrained ensembles and multi-fidelity Bayesian Optimization},
author = {Houssem Ouertatani and Cristian Maxim and Smail Niar and El-Ghazali Talbi},
url = {https://hal.science/hal-04611343},
year = {2024},
date = {2024-09-01},
urldate = {2024-09-01},
booktitle = {33rd International Conference on Artificial Neural Networks (ICANN)},
address = {Lugano, Switzerland},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}