Maintained by Difan Deng and Marius Lindauer.
The following list covers papers related to neural architecture search. It is by no means complete. If a paper is missing from the list, please let us know.
Please note that although NAS methods steadily improve, the quality of empirical evaluations in this field is still lagging behind that of other areas in machine learning, AI, and optimization. We would therefore like to share some best practices for empirical evaluations of NAS methods, which we believe will facilitate sustained and measurable progress in the field. If you are interested in a teaser, please read our blog post or jump directly to our checklist.
Transformers have gained increasing popularity across domains. For a comprehensive list of papers focusing on neural architecture search for transformer-based search spaces, the awesome-transformer-search repo is all you need.
5555
Zhu, Huijuan; Xia, Mengzhen; Wang, Liangmin; Xu, Zhicheng; Sheng, Victor S.
A Novel Knowledge Search Structure for Android Malware Detection Journal Article
In: IEEE Transactions on Services Computing, no. 01, pp. 1-14, 5555, ISSN: 1939-1374.
@article{10750332,
title = { A Novel Knowledge Search Structure for Android Malware Detection },
author = {Huijuan Zhu and Mengzhen Xia and Liangmin Wang and Zhicheng Xu and Victor S. Sheng},
url = {https://doi.ieeecomputersociety.org/10.1109/TSC.2024.3496333},
doi = {10.1109/TSC.2024.3496333},
issn = {1939-1374},
year = {5555},
date = {5555-11-01},
urldate = {5555-11-01},
journal = {IEEE Transactions on Services Computing},
number = {01},
pages = {1-14},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {While the Android platform is gaining explosive popularity, the amount of malicious software (malware) is also increasing sharply. Thus, numerous malware detection schemes based on deep learning have been proposed. However, they usually suffer from cumbersome models with complex architectures and tremendous numbers of parameters, and they require heavy computational power, which seriously limits their deployment in real application environments with limited resources (e.g., mobile edge devices). To surmount this challenge, we propose a novel Knowledge Distillation (KD) structure—Knowledge Search (KS). KS exploits Neural Architecture Search (NAS) to adaptively bridge the capability gap between teacher and student networks in KD by introducing a parallelized student-wise search approach. In addition, we carefully analyze the characteristics of malware and identify three cost-effective types of features closely related to malicious attacks, namely Application Programming Interfaces (APIs), permissions, and vulnerable components, to characterize Android Applications (Apps). Based on typical samples collected in recent years, we refine these features while exploiting the natural relationships between them, and construct corresponding datasets. Extensive experiments are conducted to investigate the effectiveness and sustainability of KS on these datasets. Our experimental results show that the proposed method yields an accuracy of 97.89% in detecting Android malware, outperforming state-of-the-art solutions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
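The KS structure above builds on standard knowledge distillation. As a rough illustration of the temperature-scaled soft-target loss that KD methods of this kind typically minimize (a generic textbook sketch, not the paper's exact formulation; the function names and the temperature default T=4.0 are illustrative choices):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T softens the distribution.
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(teacher_logits, student_logits, T=4.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 as in standard knowledge distillation.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when the student reproduces the teacher's logits and grows with their divergence; NAS-based KD variants like KS search for a student architecture that can drive this gap down.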
Zhang, Feifei; Li, Mao; Ge, Jidong; Tang, Fenghui; Zhang, Sheng; Wu, Jie; Luo, Bin
Privacy-Preserving Federated Neural Architecture Search With Enhanced Robustness for Edge Computing Journal Article
In: IEEE Transactions on Mobile Computing, no. 01, pp. 1-18, 5555, ISSN: 1558-0660.
@article{10742476,
title = { Privacy-Preserving Federated Neural Architecture Search With Enhanced Robustness for Edge Computing },
author = {Feifei Zhang and Mao Li and Jidong Ge and Fenghui Tang and Sheng Zhang and Jie Wu and Bin Luo},
url = {https://doi.ieeecomputersociety.org/10.1109/TMC.2024.3490835},
doi = {10.1109/TMC.2024.3490835},
issn = {1558-0660},
year = {5555},
date = {5555-11-01},
urldate = {5555-11-01},
journal = {IEEE Transactions on Mobile Computing},
number = {01},
pages = {1-18},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {With the development of large-scale artificial intelligence services, edge devices are becoming essential providers of data and computing power. However, these edge devices are not immune to malicious attacks. Federated learning (FL), while protecting the privacy of decentralized data through secure aggregation, struggles to trace adversaries and lacks optimization for heterogeneity. We discover that FL augmented with Differentiable Architecture Search (DARTS) can improve resilience against backdoor attacks while remaining compatible with secure aggregation. Based on this, we propose a federated neural architecture search (NAS) framework named SLNAS. The architecture of SLNAS is built on three pivotal components: a server-side search space generation method that employs an evolutionary algorithm with dual encodings, a federated NAS process based on DARTS, and client-side architecture tuning that utilizes Gumbel softmax combined with knowledge distillation. To validate robustness, we adapt a framework that includes backdoor attacks based on trigger optimization, data poisoning, and model poisoning, targeting both model weights and architecture parameters. Extensive experiments demonstrate that SLNAS not only effectively counters advanced backdoor attacks but also handles heterogeneity, outperforming defense baselines across a wide range of backdoor attack scenarios.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
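The client-side architecture tuning described above relies on the Gumbel-softmax relaxation, which turns a discrete choice among candidate operations into a differentiable sample. A minimal sketch of the standard relaxation (generic, not SLNAS's implementation; the function name and the tau default are illustrative):

```python
import math
import random

def gumbel_softmax(logits, tau=1.0, rng=random):
    # Sample Gumbel noise g = -log(-log(u)) for each logit, then relax
    # the argmax into a softmax over (logits + g) / tau. Smaller tau
    # pushes the output closer to a one-hot selection.
    g = [-math.log(-math.log(rng.random())) for _ in logits]
    z = [(l + gi) / tau for l, gi in zip(logits, g)]
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(zi - m) for zi in z]
    s = sum(exps)
    return [e / s for e in exps]
```

Each call returns a probability vector over candidate operations; because the sampling is differentiable in the logits, architecture parameters can be tuned by gradient descent.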
Zhang, Yu-Ming; Hsieh, Jun-Wei; Lee, Chun-Chieh; Fan, Kuo-Chin
RATs-NAS: Redirection of Adjacent Trails on Graph Convolutional Networks for Predictor-based Neural Architecture Search Journal Article
In: IEEE Transactions on Artificial Intelligence, vol. 1, no. 01, pp. 1-11, 5555, ISSN: 2691-4581.
@article{10685480,
title = { RATs-NAS: Redirection of Adjacent Trails on Graph Convolutional Networks for Predictor-based Neural Architecture Search },
author = {Yu-Ming Zhang and Jun-Wei Hsieh and Chun-Chieh Lee and Kuo-Chin Fan},
url = {https://doi.ieeecomputersociety.org/10.1109/TAI.2024.3465433},
doi = {10.1109/TAI.2024.3465433},
issn = {2691-4581},
year = {5555},
date = {5555-09-01},
urldate = {5555-09-01},
journal = {IEEE Transactions on Artificial Intelligence},
volume = {1},
number = {01},
pages = {1-11},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {Manually designed CNN architectures like VGG, ResNet, DenseNet, and MobileNet have achieved high performance across various tasks, but designing them is time-consuming and costly. Neural Architecture Search (NAS) automates the discovery of effective CNN architectures, reducing the need for experts. However, evaluating candidate architectures requires significant GPU resources, which has led to predictor-based NAS, in which graph convolutional networks (GCNs) are a popular choice for constructing predictors. We discover, however, that even though GCNs mimic the propagation of features through real architectures, the binary nature of the adjacency matrix limits their effectiveness. To address this, we propose Redirection of Adjacent Trails (RATs), which adaptively learns trail weights within the adjacency matrix. Our RATs-GCN outperforms other predictors by dynamically adjusting trail weights after each graph convolution layer. Additionally, the proposed Divide Search Sampling (DSS) strategy, based on the observation in cell-based NAS that architectures with similar FLOPs perform similarly, enhances search efficiency. Our RATs-NAS, which combines RATs-GCN and DSS, shows significant improvements over other predictor-based NAS methods on NASBench-101, NASBench-201, and NASBench-301.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
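The core idea of RATs is to replace the binary {0, 1} adjacency matrix of the GCN predictor with learned, real-valued trail weights. A toy sketch of how non-binary adjacency entries change the feature aggregation in one propagation step (illustrative only; the `propagate` function and the hand-written weights below are not from the paper):

```python
def propagate(adj, feats):
    # One graph-convolution aggregation step: each node sums its
    # neighbors' feature vectors, weighted by the (possibly
    # non-binary) adjacency entries adj[i][j].
    n = len(adj)
    d = len(feats[0])
    out = [[0.0] * d for _ in range(n)]
    for i in range(n):
        for j in range(n):
            w = adj[i][j]
            if w:
                for k in range(d):
                    out[i][k] += w * feats[j][k]
    return out

# Binary adjacency: node 0 copies node 1's features wholesale.
binary = propagate([[0, 1], [0, 0]], [[1.0, 1.0], [2.0, 4.0]])
# Learned trail weight 0.5: the same edge now attenuates the signal.
weighted = propagate([[0, 0.5], [0, 0]], [[1.0, 1.0], [2.0, 4.0]])
```

With learned weights, the predictor can down- or up-weight individual trails between operations instead of treating every edge identically.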
Chen, X.; Yang, C.
CIMNet: Joint Search for Neural Network and Computing-in-Memory Architecture Journal Article
In: IEEE Micro, no. 01, pp. 1-12, 5555, ISSN: 1937-4143.
@article{10551739,
title = {CIMNet: Joint Search for Neural Network and Computing-in-Memory Architecture},
author = {X. Chen and C. Yang},
url = {https://www.computer.org/csdl/magazine/mi/5555/01/10551739/1XyKBmSlmPm},
doi = {10.1109/MM.2024.3409068},
issn = {1937-4143},
year = {5555},
date = {5555-06-01},
urldate = {5555-06-01},
journal = {IEEE Micro},
number = {01},
pages = {1-12},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {Computing-in-memory (CIM) architecture has been proven to effectively transcend the memory-wall bottleneck, expanding the potential of low-power and high-throughput applications such as machine learning. Neural architecture search (NAS) designs ML models to meet a variety of accuracy, latency, and energy constraints. However, integrating CIM into NAS presents a major challenge due to the additional simulation overhead from the non-ideal characteristics of CIM hardware. This work introduces a quantization- and device-aware accuracy predictor that jointly scores quantization policy, CIM architecture, and neural network architecture, eliminating the need for time-consuming simulations in the search process. We also propose reducing the search space based on architectural observations, resulting in a well-pruned search space customized for CIM. Together, these allow efficient exploration of superior combinations in mere CPU minutes. Our methodology yields CIMNet, which consistently improves the trade-off between accuracy and hardware efficiency on benchmarks, providing valuable architectural insights.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yan, J.; Liu, J.; Xu, H.; Wang, Z.; Qiao, C.
Peaches: Personalized Federated Learning with Neural Architecture Search in Edge Computing Journal Article
In: IEEE Transactions on Mobile Computing, no. 01, pp. 1-17, 5555, ISSN: 1558-0660.
@article{10460163,
title = {Peaches: Personalized Federated Learning with Neural Architecture Search in Edge Computing},
author = {J. Yan and J. Liu and H. Xu and Z. Wang and C. Qiao},
doi = {10.1109/TMC.2024.3373506},
issn = {1558-0660},
year = {5555},
date = {5555-03-01},
urldate = {5555-03-01},
journal = {IEEE Transactions on Mobile Computing},
number = {01},
pages = {1-17},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {In edge computing (EC), federated learning (FL) enables numerous distributed devices (or workers) to collaboratively train AI models without exposing their local data. Most FL works adopt a predefined architecture on all participating workers for model training. However, since workers' local data distributions vary heavily in EC, a predefined architecture may not be the optimal choice for every worker. It is also unrealistic to manually design a high-performance architecture for each worker, which requires intense human expertise and effort. To tackle this challenge, neural architecture search (NAS) has been applied in FL to automate the architecture design process. Unfortunately, existing federated NAS frameworks often suffer from system heterogeneity and resource limitations. To remedy this problem, we present a novel framework, termed Peaches, to achieve efficient searching and training in resource-constrained EC systems. Specifically, the local model of each worker is stacked from base cells and personal cells, where the base cells are shared by all workers to capture common knowledge and the personal cells are customized for each worker to fit the local data. We determine the number of base cells, shared by all workers, according to the bandwidth budget on the parameter server. Besides, to relieve data and system heterogeneity, we find the optimal number of personal cells for each worker based on its computing capability. In addition, we gradually prune the search space during training to reduce resource consumption. We evaluate the performance of Peaches through extensive experiments, and the results show that Peaches achieves an average accuracy improvement of about 6.29% and up to 3.97× speedup compared with the baselines.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Sun, Genchen; Liu, Zhengkun; Gan, Lin; Su, Hang; Li, Ting; Zhao, Wenfeng; Sun, Biao
SpikeNAS-Bench: Benchmarking NAS Algorithms for Spiking Neural Network Architecture Journal Article
In: IEEE Transactions on Artificial Intelligence, vol. 1, no. 01, pp. 1-12, 5555, ISSN: 2691-4581.
@article{10855683,
title = { SpikeNAS-Bench: Benchmarking NAS Algorithms for Spiking Neural Network Architecture },
author = {Genchen Sun and Zhengkun Liu and Lin Gan and Hang Su and Ting Li and Wenfeng Zhao and Biao Sun},
url = {https://doi.ieeecomputersociety.org/10.1109/TAI.2025.3534136},
doi = {10.1109/TAI.2025.3534136},
issn = {2691-4581},
year = {5555},
date = {5555-01-01},
urldate = {5555-01-01},
journal = {IEEE Transactions on Artificial Intelligence},
volume = {1},
number = {01},
pages = {1-12},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {In recent years, Neural Architecture Search (NAS) has marked significant advancements, yet its efficacy is marred by a dependence on substantial computational resources. To mitigate this, NAS benchmarks have emerged, offering datasets that enumerate all potential network architectures and their performances within a predefined search space. Nonetheless, these benchmarks predominantly focus on convolutional architectures, which are criticized for their limited interpretability and suboptimal hardware efficiency. Recognizing the untapped potential of Spiking Neural Networks (SNNs) — often hailed as the third generation of neural networks for their biological realism and computational thrift — this study introduces SpikeNAS-Bench. As a pioneering benchmark for SNNs, SpikeNAS-Bench utilizes a cell-based search space, integrating leaky integrate-and-fire (LIF) neurons with variable thresholds as candidate operations. It encompasses 15,625 candidate architectures, rigorously evaluated on the CIFAR10, CIFAR100 and Tiny-ImageNet datasets. This paper delves into the architectural nuances of SpikeNAS-Bench, leveraging various criteria to underscore the benchmark’s utility and presenting insights that could steer future NAS algorithm designs. Moreover, we assess the benchmark’s consistency through three distinct proxy types: zero-cost-based, early-stop-based, and predictor-based proxies. Additionally, the paper benchmarks seven contemporary NAS algorithms to attest to SpikeNAS-Bench’s broad applicability. We commit to providing training logs and diagnostic data for all candidate architectures, and to releasing all code and datasets post-acceptance, aiming to catalyze further exploration and innovation within the SNN domain. SpikeNAS-Bench is open source at https://github.com/XXX (hidden for double anonymous review).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
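SpikeNAS-Bench's candidate operations are built from leaky integrate-and-fire (LIF) neurons with variable thresholds. A minimal discrete-time LIF simulation illustrating the leak, threshold, and reset dynamics (a generic textbook sketch; `lif_simulate` and the `leak`/`threshold` defaults are illustrative, not the benchmark's parameters):

```python
def lif_simulate(inputs, threshold=1.0, leak=0.9):
    # Leaky integrate-and-fire: the membrane potential decays by a
    # factor `leak` each step, integrates the input current, and emits
    # a spike (then resets to 0) once it crosses `threshold`.
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes
```

Sub-threshold inputs accumulate across steps until a spike fires, so varying the threshold (as the benchmark's candidate operations do) directly changes a cell's firing behavior.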
Li, Changlin; Lin, Sihao; Tang, Tao; Wang, Guangrun; Li, Mingjie; Li, Zhihui; Chang, Xiaojun
BossNAS Family: Block-wisely Self-supervised Neural Architecture Search Journal Article
In: IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 01, pp. 1-15, 5555, ISSN: 1939-3539.
@article{10839629,
title = { BossNAS Family: Block-wisely Self-supervised Neural Architecture Search },
author = {Changlin Li and Sihao Lin and Tao Tang and Guangrun Wang and Mingjie Li and Zhihui Li and Xiaojun Chang},
url = {https://doi.ieeecomputersociety.org/10.1109/TPAMI.2025.3529517},
doi = {10.1109/TPAMI.2025.3529517},
issn = {1939-3539},
year = {5555},
date = {5555-01-01},
urldate = {5555-01-01},
journal = {IEEE Transactions on Pattern Analysis & Machine Intelligence},
number = {01},
pages = {1-15},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {Recent advances in hand-crafted neural architectures for visual recognition underscore the pressing need to explore architecture designs comprising diverse building blocks. Concurrently, neural architecture search (NAS) methods have gained traction as a means to alleviate human effort. Nevertheless, whether NAS methods can efficiently and effectively manage diversified search spaces featuring disparate candidates, such as Convolutional Neural Networks (CNNs) and transformers, remains an open question. In this work, we introduce a novel unsupervised NAS approach called BossNAS (Block-wisely Self-supervised Neural Architecture Search), which aims to address the problem of inaccurate predictive architecture ranking caused by a large weight-sharing space while mitigating potential ranking issues caused by biased supervision. To achieve this, we factorize the search space into blocks and introduce a novel self-supervised training scheme, called Ensemble Bootstrapping, to train each block separately in an unsupervised manner. In the search phase, we propose an unsupervised Population-Centric Search, optimizing the candidate architecture towards the population center. Additionally, we enhance our NAS method by integrating masked image modeling and present BossNAS++ to overcome the lack of dense supervision in our block-wise self-supervised NAS. In BossNAS++, we introduce a training technique named Masked Ensemble Bootstrapping for the block-wise supernet, accompanied by a Masked Population-Centric Search scheme to promote fairer architecture selection. Our family of models, discovered through BossNAS and BossNAS++, delivers impressive results across various search spaces and datasets. Our transformer model discovered by BossNAS++ attains a remarkable accuracy of 83.2% on ImageNet with only 10.5B MAdds, surpassing DeiT-B by 1.4% while maintaining a lower computation cost. Moreover, our approach excels in architecture rating accuracy, achieving Spearman correlations of 0.78 and 0.76 on the canonical MBConv search space with ImageNet and the NATS-Bench size search space with CIFAR-100, respectively, outperforming state-of-the-art NAS methods.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2025
Wang, Weiqi; Bao, Feilong; Xing, Zhecong; Lian, Zhe
A Survey: Research Progress of Feature Fusion Technology Journal Article
In: 2025.
@article{wangsurvey,
title = {A Survey: Research Progress of Feature Fusion Technology},
author = {Weiqi Wang and Feilong Bao and Zhecong Xing and Zhe Lian},
url = {http://poster-openaccess.com/files/ICIC2024/862.pdf},
year = {2025},
date = {2025-12-01},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
(Ed.)
Machine-Generated Neural Networks for Short-Term Load Forecasting Collection
2025.
@collection{nokey,
title = {Machine-Generated Neural Networks for Short-Term Load Forecasting},
author = {Gergana Vacheva and Plamen Stanchev and Nikolay Hinov},
url = {https://unitechsp.tugab.bg/images/2024/1-EE/s1_p143_v1.pdf},
year = {2025},
date = {2025-12-01},
urldate = {2025-12-01},
booktitle = {International Scientific Conference UNITECH`2024},
journal = {International Scientific Conference UNITECH`2024},
keywords = {},
pubstate = {published},
tppubtype = {collection}
}
Huang, Tao
Efficient Deep Neural Architecture Design and Training PhD Thesis
2025.
@phdthesis{nokey,
title = {Efficient Deep Neural Architecture Design and Training},
author = {Tao Huang},
url = {https://ses.library.usyd.edu.au/handle/2123/33598},
year = {2025},
date = {2025-02-01},
urldate = {2025-02-01},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Herterich, Nils; Liu, Kai; Stein, Anthony
Multi-objective neural architecture search for real-time weed detection on embedded system Miscellaneous
2025.
@misc{Herterich,
title = {Multi-objective neural architecture search for real-time weed detection on embedded system},
author = {Nils Herterich and Kai Liu and Anthony Stein},
url = {https://dl.gi.de/server/api/core/bitstreams/29a49f8d-304e-4073-8a92-4bef6483c087/content},
year = {2025},
date = {2025-02-01},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}
Tabak, Gabriel Couto; Molenaar, Dylan; Curi, Mariana
An evolutionary neural architecture search for item response theory autoencoders Journal Article
In: Behaviormetrika, 2025.
@article{nokey,
title = {An evolutionary neural architecture search for item response theory autoencoders},
author = {Gabriel Couto Tabak and Dylan Molenaar and Mariana Curi},
url = {https://link.springer.com/article/10.1007/s41237-024-00250-5},
year = {2025},
date = {2025-01-27},
urldate = {2025-01-27},
journal = {Behaviormetrika},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Hao, Debei; Pei, Songwei
MIG-DARTS: towards effective differentiable architecture search by gradually mitigating the initial-channel gap between search and evaluation Journal Article
In: Neural Computing and Applications, 2025.
@article{nokey,
title = {MIG-DARTS: towards effective differentiable architecture search by gradually mitigating the initial-channel gap between search and evaluation},
author = {Debei Hao and Songwei Pei},
url = {https://link.springer.com/article/10.1007/s00521-024-10681-6},
year = {2025},
date = {2025-01-09},
urldate = {2025-01-09},
journal = {Neural Computing and Applications},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
(Ed.)
H4H: Hybrid Convolution-Transformer Architecture Search for NPU-CIM Heterogeneous Systems for AR/VR Applications Collection
2025.
@collection{nokey,
title = {H4H: Hybrid Convolution-Transformer Architecture Search for NPU-CIM Heterogeneous Systems for AR/VR Applications},
author = {Yiwei Zhao and Jinhui Chen and Sai Qian Zhang and Syed Shakib Sarwar and Kleber Hugo Stangherlin and Jorge Tomas Gomez and Jae-Sun Seo and Barbara De Salvo and Chiao Liu and Phillip B. Gibbons and Ziyun Li},
url = {https://www.pdl.cmu.edu/PDL-FTP/associated/ASP-DAC2025-1073-12.pdf},
year = {2025},
date = {2025-01-02},
urldate = {2025-01-02},
booktitle = {ASPDAC ’25},
keywords = {},
pubstate = {published},
tppubtype = {collection}
}
Solís-Martín, David; Galán-Páez, Juan; Borrego-Díaz, Joaquín
A Model for Learning-Curve Estimation in Efficient Neural Architecture Search and Its Application in Predictive Health Maintenance Journal Article
In: Mathematics, vol. 13, no. 4, 2025, ISSN: 2227-7390.
@article{math13040555,
title = {A Model for Learning-Curve Estimation in Efficient Neural Architecture Search and Its Application in Predictive Health Maintenance},
author = {David Solís-Martín and Juan Galán-Páez and Joaquín Borrego-Díaz},
url = {https://www.mdpi.com/2227-7390/13/4/555},
doi = {10.3390/math13040555},
issn = {2227-7390},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Mathematics},
volume = {13},
number = {4},
abstract = {A persistent challenge in machine learning is the computational inefficiency of neural architecture search (NAS), particularly in resource-constrained domains like predictive maintenance. This work introduces a novel learning-curve estimation framework that reduces NAS computational costs by over 50% while maintaining model performance, addressing a critical bottleneck in automated machine learning design. By developing a data-driven estimator trained on 62 different predictive maintenance datasets, we demonstrate a generalized approach to early-stopping trials during neural network optimization. Our methodology not only reduces computational resources but also provides a transferable technique for efficient neural network architecture exploration across complex industrial monitoring tasks. The proposed approach achieves a remarkable balance between computational efficiency and model performance, with only a 2% performance degradation, showcasing a significant advancement in automated neural architecture optimization strategies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
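The paper above trains a learned estimator that decides from a partial learning curve whether a NAS trial is worth continuing. As a much simpler stand-in for the same early-stopping decision, here is a patience-based heuristic (not the paper's data-driven model; `should_stop` and its defaults are illustrative):

```python
def should_stop(val_scores, patience=3, min_delta=1e-3):
    # Stop a trial when the best validation score over the last
    # `patience` epochs has not improved on the earlier best by at
    # least `min_delta`. Higher scores are assumed to be better.
    if len(val_scores) <= patience:
        return False  # not enough history yet
    best_earlier = max(val_scores[:-patience])
    best_recent = max(val_scores[-patience:])
    return best_recent < best_earlier + min_delta
```

A learned curve estimator, like the one proposed in the paper, can make this call earlier and more reliably by extrapolating the curve instead of waiting for improvement to stall.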
Liu, Guangyuan; Li, Yangyang; Chen, Yanqiao; Shang, Ronghua; Jiao, Licheng
AutoPolCNN: A neural architecture search method of convolutional neural network for PolSAR image classification Journal Article
In: Knowledge-Based Systems, vol. 312, pp. 113122, 2025, ISSN: 0950-7051.
@article{LIU2025113122,
title = {AutoPolCNN: A neural architecture search method of convolutional neural network for PolSAR image classification},
author = {Guangyuan Liu and Yangyang Li and Yanqiao Chen and Ronghua Shang and Licheng Jiao},
url = {https://www.sciencedirect.com/science/article/pii/S0950705125001698},
doi = {10.1016/j.knosys.2025.113122},
issn = {0950-7051},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Knowledge-Based Systems},
volume = {312},
pages = {113122},
abstract = {Convolutional neural networks (CNNs), as a kind of typical classification model known for good performance, have been used for polarimetric synthetic aperture radar (PolSAR) image classification. Nevertheless, the performance of CNNs relies heavily on well-designed network architectures, and there is no theoretical guarantee on how to design them. As a result, CNN architectures can only be designed by human experts or by trial and error, which makes architecture design tedious and time-consuming. This paper therefore proposes a neural architecture search (NAS) method for CNNs, called AutoPolCNN, which can determine the architecture automatically. Specifically, we first design a search space that covers the main components of CNNs such as convolution and pooling operators. Second, since the number of layers also influences the performance of a CNN, we propose a super normal module (SNM), which can dynamically adjust the number of network layers for different datasets in the search stage. Finally, we develop the loss function and the search method for the designed search space. With AutoPolCNN, it is enough to prepare the data and wait for the classification results. Experiments carried out on three PolSAR datasets show that the architecture can be automatically determined by AutoPolCNN within an hour (at least 10 times faster than existing NAS methods) and achieves higher overall accuracy (OA) than state-of-the-art (SOTA) PolSAR image classification CNN models.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Deevi, Sri Aditya; Mishra, Asish Kumar; Mishra, Deepak; L, Ravi Kumar; P, Bharat Kumar G V; G, Murali Krishna Bhagavan
Efficient Self-Supervised Neural Architecture Search Proceedings Article
In: 2025 19th International Conference on Ubiquitous Information Management and Communication (IMCOM), pp. 1-8, 2025.
@inproceedings{10857490,
title = {Efficient Self-Supervised Neural Architecture Search},
author = {Sri Aditya Deevi and Asish Kumar Mishra and Deepak Mishra and Ravi Kumar L and Bharat Kumar G V P and Murali Krishna Bhagavan G},
url = {https://ieeexplore.ieee.org/abstract/document/10857490},
doi = {10.1109/IMCOM64595.2025.10857490},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {2025 19th International Conference on Ubiquitous Information Management and Communication (IMCOM)},
pages = {1-8},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Siddiqui, Shahid; Kyrkou, Christos; Theocharides, Theocharis
Efficient Global Neural Architecture Search Technical Report
2025.
@techreport{siddiqui2025efficientglobalneuralarchitecture,
title = {Efficient Global Neural Architecture Search},
author = {Shahid Siddiqui and Christos Kyrkou and Theocharis Theocharides},
url = {https://arxiv.org/abs/2502.03553},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Zhao, Xinlong; Sun, Jiande; Zhang, Jia; Hou, Sujuan; Li, Shuai; Liu, Tong; Liu, Ke
PerfSeer: An Efficient and Accurate Deep Learning Models Performance Predictor Technical Report
2025.
@techreport{zhao2025perfseerefficientaccuratedeep,
title = {PerfSeer: An Efficient and Accurate Deep Learning Models Performance Predictor},
author = {Xinlong Zhao and Jiande Sun and Jia Zhang and Sujuan Hou and Shuai Li and Tong Liu and Ke Liu},
url = {https://arxiv.org/abs/2502.01206},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Li, Jianzhao; Wang, Shanfeng; Yang, Rui; Gong, Maoguo; Hu, Zhuping; Zhang, Ning; Sheng, Kai; Zhou, Yu
Towards Federated Customized Neural Architecture Search for Remote Sensing Scene Classification Journal Article
In: IEEE Transactions on Geoscience and Remote Sensing, pp. 1-1, 2025.
@article{10858749,
title = {Towards Federated Customized Neural Architecture Search for Remote Sensing Scene Classification},
author = {Jianzhao Li and Shanfeng Wang and Rui Yang and Maoguo Gong and Zhuping Hu and Ning Zhang and Kai Sheng and Yu Zhou},
url = {https://ieeexplore.ieee.org/abstract/document/10858749},
doi = {10.1109/TGRS.2025.3537085},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Geoscience and Remote Sensing},
pages = {1-1},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Banerjee, Somnath
Neural Architecture Search Based Deepfake Detection Model using YOLO Journal Article
In: International Journal of Advanced Research in Science, Communication and Technology, vol. 5, no. 1, pp. 375-383, 2025.
@article{banerjee:hal-04901372,
title = {Neural Architecture Search Based Deepfake Detection Model using YOLO},
author = {Somnath Banerjee},
url = {https://hal.science/hal-04901372},
doi = {10.48175/ijarsct-22938},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {International Journal of Advanced Research in Science, Communication and Technology},
volume = {5},
number = {1},
pages = {375-383},
publisher = {Naksh Solutions},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Errica, Federico; Christiansen, Henrik; Zaverkin, Viktor; Niepert, Mathias; Alesiani, Francesco
Adaptive Width Neural Networks Technical Report
2025.
@techreport{errica2025adaptivewidthneuralnetworks,
title = {Adaptive Width Neural Networks},
author = {Federico Errica and Henrik Christiansen and Viktor Zaverkin and Mathias Niepert and Francesco Alesiani},
url = {https://arxiv.org/abs/2501.15889},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Sander, Jacob; Cohen, Achraf; Dasari, Venkat R.; Venable, Brent; Jalaian, Brian
On Accelerating Edge AI: Optimizing Resource-Constrained Environments Technical Report
2025.
@techreport{sander2025acceleratingedgeaioptimizing,
title = {On Accelerating Edge AI: Optimizing Resource-Constrained Environments},
author = {Jacob Sander and Achraf Cohen and Venkat R. Dasari and Brent Venable and Brian Jalaian},
url = {https://arxiv.org/abs/2501.15014},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Tu, Xiaolong; Chen, Dawei; Han, Kyungtae; Altintas, Onur; Wang, Haoxin
GreenAuto: An Automated Platform for Sustainable AI Model Design on Edge Devices Technical Report
2025.
@techreport{tu2025greenautoautomatedplatformsustainable,
title = {GreenAuto: An Automated Platform for Sustainable AI Model Design on Edge Devices},
author = {Xiaolong Tu and Dawei Chen and Kyungtae Han and Onur Altintas and Haoxin Wang},
url = {https://arxiv.org/abs/2501.14995},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Muñoz, J. Pablo; Yuan, Jinjie; Jain, Nilesh
Low-Rank Adapters Meet Neural Architecture Search for LLM Compression Technical Report
2025.
@techreport{muñoz2025lowrankadaptersmeetneural,
title = {Low-Rank Adapters Meet Neural Architecture Search for LLM Compression},
author = {J. Pablo Muñoz and Jinjie Yuan and Nilesh Jain},
url = {https://arxiv.org/abs/2501.16372},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Thudumu, Srikanth; Nguyen, Hy; Du, Hung; Duong, Nhat; Rasool, Zafaryab; Logothetis, Rena; Barnett, Scott; Vasa, Rajesh; Mouzakis, Kon
The M-factor: A Novel Metric for Evaluating Neural Architecture Search in Resource-Constrained Environments Technical Report
2025.
@techreport{thudumu2025mfactornovelmetricevaluating,
title = {The M-factor: A Novel Metric for Evaluating Neural Architecture Search in Resource-Constrained Environments},
author = {Srikanth Thudumu and Hy Nguyen and Hung Du and Nhat Duong and Zafaryab Rasool and Rena Logothetis and Scott Barnett and Rajesh Vasa and Kon Mouzakis},
url = {https://arxiv.org/abs/2501.17361},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Xu, Bo; Xie, Qiujie; Zhou, Jiahui; Zong, Linlin
Triple Path Enhanced Neural Architecture Search for Multimodal Fake News Detection Technical Report
2025.
@techreport{xu2025triplepathenhancedneural,
title = {Triple Path Enhanced Neural Architecture Search for Multimodal Fake News Detection},
author = {Bo Xu and Qiujie Xie and Jiahui Zhou and Linlin Zong},
url = {https://arxiv.org/abs/2501.14455},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Wang, Yifan; Zhong, Linlin
NAS-PINNv2: Improved neural architecture search framework for physics-informed neural networks in low-temperature plasma simulation Technical Report
2025.
@techreport{wang2025naspinnv2improvedneuralarchitecture,
title = {NAS-PINNv2: Improved neural architecture search framework for physics-informed neural networks in low-temperature plasma simulation},
author = {Yifan Wang and Linlin Zhong},
url = {https://arxiv.org/abs/2501.15160},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Xiong, Jianghui; Gu, Shangfeng; Rao, Yuan; Zhang, Xiaodan; Wu, Yuting; Lu, Jie; Jin, Xiu
An innovative fusion method with micro-vision and spectrum of wheat for detecting asymptomatic Fusarium head blight Journal Article
In: Journal of Food Composition and Analysis, vol. 140, pp. 107258, 2025, ISSN: 0889-1575.
@article{XIONG2025107258,
title = {An innovative fusion method with micro-vision and spectrum of wheat for detecting asymptomatic Fusarium head blight},
author = {Jianghui Xiong and Shangfeng Gu and Yuan Rao and Xiaodan Zhang and Yuting Wu and Jie Lu and Xiu Jin},
url = {https://www.sciencedirect.com/science/article/pii/S0889157525000729},
doi = {10.1016/j.jfca.2025.107258},
issn = {0889-1575},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Journal of Food Composition and Analysis},
volume = {140},
pages = {107258},
abstract = {Fusarium head blight (FHB) poses a significant threat to global wheat health and seriously affects the quality of the wheat and its products. Therefore, detection of early FHB infection in wheat is crucial for preventing its rapid spread and ensuring food safety. This study proposed an innovative fusion method for detecting the severity of FHB invasion in wheat based on near-infrared spectroscopy and microscopic visual images. This method concatenated 512 features from near-infrared spectra and microscopic visual images of wheat and used neural architecture search (NAS) to build a model for fused features to achieve accurate classification of the degree of infection caused by pathogens in wheat, with accuracy of 90.60 % and F1-score of 90.95 %. This represented significant improvements of 20.80 % and 21.79 % over single spectral data modelling and 11.41 % and 12.67 % over single image data modelling, respectively. The study results showed that this method enables more accurate and non-destructive detection of FHB in wheat, providing a solution for the early identification of potential fungal diseases, which is valuable for improving the quality and yield of wheat.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Ouertatani, H.; Maxim, C.; Niar, S.; Talbi, E-G.
Neural Architecture Tuning: A BO-Powered NAS Tool Proceedings Article
In: Dorronsoro, Bernabé; Zagar, Martin; Talbi, El-Ghazali (Ed.): Optimization and Learning, pp. 82–93, Springer Nature Switzerland, Cham, 2025, ISBN: 978-3-031-77941-1.
@inproceedings{10.1007/978-3-031-77941-1_7,
title = {Neural Architecture Tuning: A BO-Powered NAS Tool},
author = {H. Ouertatani and C. Maxim and S. Niar and E-G. Talbi},
editor = {Bernabé Dorronsoro and Martin Zagar and El-Ghazali Talbi},
url = {https://link.springer.com/chapter/10.1007/978-3-031-77941-1_7},
isbn = {978-3-031-77941-1},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Optimization and Learning},
pages = {82–93},
publisher = {Springer Nature Switzerland},
address = {Cham},
abstract = {Neural Architecture Search (NAS) consists of applying an optimization technique to find the best performing architecture(s) in a defined search space, with regard to an objective function. The practical implementation of NAS currently carries certain limitations, including prohibitive costs with the need for a large number of evaluations, an inflexibility in defining the search space by often having to select from a limited set of possible design components, and a difficulty of integrating existing architecture code by requiring a specialized design language for search space specification. We propose a simplified search tool, with efficiency in the number of evaluations needed to achieve good results, and flexibility by design, allowing for an easy and open definition of the search space and objective function. Interoperability with existing code or newly released architectures from the literature allows the user to quickly and easily tune architectures to produce well-performing solutions tailor-made for particular use cases. We practically apply this tool to certain vision search spaces, and showcase its effectiveness.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Frachon, Luc J.
Novel approaches in macro-level neural ensemble architecture search PhD Thesis
2025.
@phdthesis{frachon2025macrolevel,
title = {Novel approaches in macro-level neural ensemble architecture search},
author = {Luc J. Frachon},
url = {https://www.ros.hw.ac.uk/handle/10399/5040},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Bastami, Sajad; Dolatshahi, Mohammad Bagher
Compact Neural Architecture Search for Image Classification Using Gravitational Search Algorithm Journal Article
In: Applied and Basic Machine Intelligence Research, pp. 77-91, 2025, ISSN: 2821-2029.
@article{bastami2025compact,
title = {Compact Neural Architecture Search for Image Classification Using Gravitational Search Algorithm},
author = {Sajad Bastami and Mohammad Bagher Dolatshahi},
url = {https://abmir.yazd.ac.ir/article_3644.html},
doi = {10.22034/abmir.2024.22228.1066},
issn = {2821-2029},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Applied and Basic Machine Intelligence Research},
pages = {77-91},
publisher = {Yazd University},
abstract = {This paper presents a compact neural architecture search method for image classification using the Gravitational Search Algorithm (GSA). Deep learning, through multi-layer computational models, enables automatic feature extraction from raw data at various levels of abstraction, playing a key role in complex tasks such as image classification. Neural Architecture Search (NAS), which automatically discovers new architectures for Convolutional Neural Networks (CNNs), faces challenges such as high computational complexity and costs. To address these issues, a GSA-based approach has been developed, employing a bi-level variable-length optimization technique to design both micro and macro architectures of CNNs. This approach, leveraging a compact search space and modified convolutional bottlenecks, demonstrates superior performance compared to state-of-the-art methods. Experimental results on CIFAR-10, CIFAR-100, and ImageNet datasets reveal that the proposed method achieves a classification accuracy of 98.48% with a search cost of 1.05 GPU days, outperforming existing algorithms in terms of accuracy, search efficiency, and architectural complexity.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Deutel, Mark; Kontes, Georgios; Mutschler, Christopher; Teich, Jürgen
Combining Multi-Objective Bayesian Optimization with Reinforcement Learning for TinyML Journal Article
In: ACM Trans. Evol. Learn. Optim., 2025, (Just Accepted).
@article{10.1145/3715012,
title = {Combining Multi-Objective Bayesian Optimization with Reinforcement Learning for TinyML},
author = {Mark Deutel and Georgios Kontes and Christopher Mutschler and Jürgen Teich},
url = {https://doi.org/10.1145/3715012},
doi = {10.1145/3715012},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {ACM Trans. Evol. Learn. Optim.},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
abstract = {Deploying deep neural networks (DNNs) on microcontrollers (TinyML) is a common trend to process the increasing amount of sensor data generated at the edge, but in practice, resource and latency constraints make it difficult to find optimal DNN candidates. Neural architecture search (NAS) is an excellent approach to automate this search and can easily be combined with DNN compression techniques commonly used in TinyML. However, many NAS techniques are not only computationally expensive, especially hyperparameter optimization (HPO), but also often focus on optimizing only a single objective, e.g., maximizing accuracy, without considering additional objectives such as memory requirements or computational complexity of a DNN, which are key to making deployment at the edge feasible. In this paper, we propose a novel NAS strategy for TinyML based on multi-objective Bayesian optimization (MOBOpt) and an ensemble of competing parametric policies trained using Augmented Random Search (ARS) reinforcement learning (RL) agents. Our methodology aims at efficiently finding tradeoffs between a DNN’s predictive accuracy, memory requirements on a given target system, and computational complexity. Our experiments show that we consistently outperform existing MOBOpt approaches on different datasets and architectures such as ResNet-18 and MobileNetV3.},
note = {Just Accepted},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Douka, Stella; Verbockhaven, Manon; Rudkiewicz, Théo; Rivaud, Stéphane; Landes, François P; Chevallier, Sylvain; Charpiat, Guillaume
Growth strategies for arbitrary DAG neural architectures Technical Report
2025.
@techreport{douka2025growthstrategiesarbitrarydag,
title = {Growth strategies for arbitrary DAG neural architectures},
author = {Stella Douka and Manon Verbockhaven and Théo Rudkiewicz and Stéphane Rivaud and François P Landes and Sylvain Chevallier and Guillaume Charpiat},
url = {https://arxiv.org/abs/2501.12690},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Zada, Moustafa
Hybrid-Quantum Neural Architecture Search for The Proximal Policy Optimization Algorithm Technical Report
2025.
@techreport{zada2025hybridquantum,
title = {Hybrid-Quantum Neural Architecture Search for The Proximal Policy Optimization Algorithm},
author = {Moustafa Zada},
url = {https://zenodo.org/doi/10.5281/zenodo.14625856},
doi = {10.5281/ZENODO.14625856},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
publisher = {Zenodo},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Wu, Yue; Gong, Peiran; Yuan, Yongzhe; Gong, Maoguo; Ma, Wenping; Miao, Qiguang
Evolutionary Neural Architecture Search Framework With Masked Encoding Mechanism for Point Cloud Registration Journal Article
In: IEEE Transactions on Emerging Topics in Computational Intelligence, pp. 1-13, 2025.
@article{10847916,
title = {Evolutionary Neural Architecture Search Framework With Masked Encoding Mechanism for Point Cloud Registration},
author = {Yue Wu and Peiran Gong and Yongzhe Yuan and Maoguo Gong and Wenping Ma and Qiguang Miao},
url = {https://ieeexplore.ieee.org/abstract/document/10847916},
doi = {10.1109/TETCI.2025.3526279},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Emerging Topics in Computational Intelligence},
pages = {1-13},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Youm, Sungkwan; Go, Sunghyun
Lightweight and Efficient CSI-Based Human Activity Recognition via Bayesian Optimization-Guided Architecture Search and Structured Pruning Journal Article
In: Applied Sciences, vol. 15, no. 2, 2025, ISSN: 2076-3417.
@article{app15020890,
title = {Lightweight and Efficient CSI-Based Human Activity Recognition via Bayesian Optimization-Guided Architecture Search and Structured Pruning},
author = {Sungkwan Youm and Sunghyun Go},
url = {https://www.mdpi.com/2076-3417/15/2/890},
doi = {10.3390/app15020890},
issn = {2076-3417},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Applied Sciences},
volume = {15},
number = {2},
abstract = {This paper presents an integrated approach to developing lightweight, high-performance deep learning models for human activity recognition (HAR) using WiFi Channel State Information (CSI). Motivated by the need for accuracy and efficiency in resource-constrained environments, we combine Bayesian Optimization-based Neural Architecture Search (NAS) with a structured pruning algorithm. NAS identifies optimal network configurations, while pruning systematically removes redundant parameters, preserving accuracy. This approach allows for robust activity recognition from diverse WiFi datasets under varying conditions. Experimental results across multiple benchmark datasets demonstrate that our method not only maintains but often improves accuracy after pruning, resulting in models that are both smaller and more accurate. This offers a scalable and adaptable solution for real-world deployments in IoT and mobile platforms, achieving an optimal balance of efficiency and accuracy in HAR using WiFi CSI.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Abebe, Waqwoya; Jafari, Sadegh; Yu, Sixing; Dutta, Akash; Strube, Jan; Tallent, Nathan R.; Guo, Luanzheng; Munoz, Pablo; Jannesari, Ali
SuperSAM: Crafting a SAM Supernetwork via Structured Pruning and Unstructured Parameter Prioritization Technical Report
2025.
@techreport{abebe2025supersamcraftingsamsupernetwork,
title = {SuperSAM: Crafting a SAM Supernetwork via Structured Pruning and Unstructured Parameter Prioritization},
author = {Waqwoya Abebe and Sadegh Jafari and Sixing Yu and Akash Dutta and Jan Strube and Nathan R. Tallent and Luanzheng Guo and Pablo Munoz and Ali Jannesari},
url = {https://arxiv.org/abs/2501.08504},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Slimani, Hicham; Mhamdi, Jamal El; Jilbab, Abdelilah
Advanced Algorithmic Model for Real-Time Multi-Level Crop Disease Detection Using Neural Architecture Search Journal Article
In: E3S Web Conf., vol. 601, pp. 00032, 2025.
@article{refId0c,
title = {Advanced Algorithmic Model for Real-Time Multi-Level Crop Disease Detection Using Neural Architecture Search},
author = {Hicham Slimani and Jamal El Mhamdi and Abdelilah Jilbab},
url = {https://doi.org/10.1051/e3sconf/202560100032},
doi = {10.1051/e3sconf/202560100032},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {E3S Web Conf.},
volume = {601},
pages = {00032},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Jiang, Pengcheng; Xue, Yu; Neri, Ferrante
Score Predictor-Assisted Evolutionary Neural Architecture Search Journal Article
In: IEEE Transactions on Emerging Topics in Computational Intelligence, pp. 1-15, 2025.
@article{10841460,
title = {Score Predictor-Assisted Evolutionary Neural Architecture Search},
author = {Pengcheng Jiang and Yu Xue and Ferrante Neri},
url = {https://ieeexplore.ieee.org/abstract/document/10841460},
doi = {10.1109/TETCI.2025.3526179},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Emerging Topics in Computational Intelligence},
pages = {1-15},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Nie, Mingshuo; Chen, Dongming; Chen, Huilin; Wang, Dongqi
AutoMTNAS: Automated meta-reinforcement learning on graph tokenization for graph neural architecture search Journal Article
In: Knowledge-Based Systems, vol. 310, pp. 113023, 2025, ISSN: 0950-7051.
@article{NIE2025113023,
title = {AutoMTNAS: Automated meta-reinforcement learning on graph tokenization for graph neural architecture search},
author = {Mingshuo Nie and Dongming Chen and Huilin Chen and Dongqi Wang},
url = {https://www.sciencedirect.com/science/article/pii/S0950705125000711},
doi = {10.1016/j.knosys.2025.113023},
issn = {0950-7051},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Knowledge-Based Systems},
volume = {310},
pages = {113023},
abstract = {Graph neural networks have achieved breakthroughs in various fields due to their powerful automated representation capabilities for graph. Designing effective graph neural architectures is critical for feature representation and property prediction in non-Euclidean graph-structured data. However, this design process heavily relies on the strong prior knowledge and experience of researchers. The inherent complexity and irregularity in graph-structured data make it challenging for existing methods to develop strategies for capturing expressive representations beyond traditional paradigms, resulting in unaffordable computational cost and precision loss across diverse graphs. To this end, we propose a novel automated meta-reinforcement learning on graph tokenization for graph neural architecture search, named AutoMTNAS, to learn a more general and reliable architecture search policy. In particular, our graph tokenization method identifies critical nodes and structural patterns within the graph and captures label-aware global information to summarize potential valuable insights. We define a meta-reinforcement learning searcher that utilizes parameter sharing and policy gradients to discover optimal architectures for new tasks, even with limited available observations. Extensive experiments on benchmark datasets, ranging from small to large, demonstrate that AutoMTNAS outperforms human-invented architectures and existing graph neural architecture search methods.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Louati, Hassen; Louati, Ali; Mansour, Khalid; Kariri, Elham
Achieving Faster and Smarter Chest X-Ray Classification With Optimized CNNs Journal Article
In: IEEE Access, vol. 13, pp. 10070–10082, 2025.
@article{DBLP:journals/access/LouatiLMK25,
title = {Achieving Faster and Smarter Chest X-Ray Classification With Optimized CNNs},
author = {Hassen Louati and Ali Louati and Khalid Mansour and Elham Kariri},
url = {https://doi.org/10.1109/ACCESS.2025.3529206},
doi = {10.1109/ACCESS.2025.3529206},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Access},
volume = {13},
pages = {10070–10082},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Song, Xiaotian; Lv, Zeqiong; Fan, Jiaohao; Deng, Xiong; Lv, Jiancheng; Liu, Jiyuan; Sun, Yanan
Evolutionary Multi-Objective Spiking Neural Architecture Search for Image Classification Journal Article
In: IEEE Transactions on Evolutionary Computation, pp. 1-1, 2025.
@article{10838601,
title = {Evolutionary Multi-Objective Spiking Neural Architecture Search for Image Classification},
author = {Xiaotian Song and Zeqiong Lv and Jiaohao Fan and Xiong Deng and Jiancheng Lv and Jiyuan Liu and Yanan Sun},
doi = {10.1109/TEVC.2025.3528471},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Evolutionary Computation},
pages = {1-1},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Weitz, Jason; Demler, Dmitri; McDermott, Luke; Tran, Nhan; Duarte, Javier
Neural Architecture Codesign for Fast Physics Applications Technical Report
2025.
@techreport{weitz2025neuralarchitecturecodesignfast,
title = {Neural Architecture Codesign for Fast Physics Applications},
author = {Jason Weitz and Dmitri Demler and Luke McDermott and Nhan Tran and Javier Duarte},
url = {https://arxiv.org/abs/2501.05515},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Saeed, Farah; Tan, Chenjiao; Liu, Tianming; Li, Changying
3D neural architecture search to optimize segmentation of plant parts Journal Article
In: Smart Agricultural Technology, vol. 10, pp. 100776, 2025, ISSN: 2772-3755.
@article{SAEED2025100776,
title = {3D neural architecture search to optimize segmentation of plant parts},
author = {Farah Saeed and Chenjiao Tan and Tianming Liu and Changying Li},
url = {https://www.sciencedirect.com/science/article/pii/S2772375525000103},
doi = {10.1016/j.atech.2025.100776},
issn = {2772-3755},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Smart Agricultural Technology},
volume = {10},
pages = {100776},
abstract = {Accurately segmenting plant parts from imagery is vital for improving crop phenotypic traits. However, current 3D deep learning models for segmentation in point cloud data require specific network architectures that are usually manually designed, which is both tedious and suboptimal. To overcome this issue, a 3D neural architecture search (NAS) was performed in this study to optimize cotton plant part segmentation. The search space was designed using Point Voxel Convolution (PVConv) as the basic building block of the network. The NAS framework included a supernetwork with weight sharing and an evolutionary search to find optimal candidates, with three surrogate learners to predict mean IoU, latency, and memory footprint. The optimal candidate searched from the proposed method consisted of five PVConv layers with either 32 or 512 output channels, achieving mean IoU and accuracy of over 90 % and 96 %, respectively, and outperforming manually designed architectures. Additionally, the evolutionary search was updated to search for architectures satisfying memory and time constraints, with searched architectures achieving mean IoU and accuracy of >84 % and 94 %, respectively. Furthermore, a differentiable architecture search (DARTS) utilizing PVConv operation was implemented for comparison, and our method demonstrated better segmentation performance with a margin of >2 % and 1 % in mean IoU and accuracy, respectively. Overall, the proposed method can be applied to segment cotton plants with an accuracy over 94 %, while adjusting to available resource constraints.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yu, Jiandong; Li, Tongtong; Shi, Xuerong; Zhao, Ziyang; Chen, Miao; Zhang, Yu; Wang, Junyu; Yao, Zhijun; Fang, Lei; Hu, Bin
ETMO-NAS: An efficient two-step multimodal one-shot NAS for lung nodules classification Journal Article
In: Biomedical Signal Processing and Control, vol. 104, pp. 107479, 2025, ISSN: 1746-8094.
@article{YU2025107479,
title = {ETMO-NAS: An efficient two-step multimodal one-shot NAS for lung nodules classification},
author = {Jiandong Yu and Tongtong Li and Xuerong Shi and Ziyang Zhao and Miao Chen and Yu Zhang and Junyu Wang and Zhijun Yao and Lei Fang and Bin Hu},
url = {https://www.sciencedirect.com/science/article/pii/S1746809424015374},
doi = {10.1016/j.bspc.2024.107479},
issn = {1746-8094},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Biomedical Signal Processing and Control},
volume = {104},
pages = {107479},
abstract = {Malignant lung nodules are the initial diagnostic manifestation of lung cancer. Accurate predictive classification of malignant from benign lung nodules can improve treatment efficacy and survival rate of lung cancer patients. Since current deep learning-based PET/CT pulmonary nodule-assisted diagnosis models typically rely on network architectures carefully designed by researchers, which require professional expertise and extensive prior knowledge. To combat these challenges, in this paper, we propose an efficient two-step multimodal one-shot NAS (ETMO-NAS) for searching high-performance network architectures for reliable and accurate classification of lung nodules for multimodal PET/CT data. Specifically, the step I focuses on fully training the performance of all candidate architectures in the search space using the sandwich rule and in-place distillation strategy. The step II aims to split the search space into multiple non-overlapping subsupernets by parallel operation edge decomposition strategy and then fine-tune the subsupernets further improve performance. Finally, the performance of ETMO-NAS was validated on a set of real clinical data. The experimental results show that the classification architecture searched by ETMO-NAS achieves the best performance with accuracy, precision, sensitivity, specificity, and F-1 score of 94.23%, 92.10%, 95.83%, 92.86% and 0.9388, respectively. In addition, compared with the classical CNN model and NAS model, ETMO-NAS performs better with the same inputs, but with only 1/33–1/5 of the parameters. This provides substantial evidence for the competitiveness of the model in classification tasks and presents a new approach for automated diagnosis of PET/CT pulmonary nodules. Code and models will be available at: https://github.com/yujiandong0002/ETMO-NAS.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yu, Caiyang; Wang, Jian; Wang, Yifan; Ju, Wei; Tang, Chenwei; Lv, Jiancheng
Rethinking neural architecture representation for predictors: Topological encoding in pixel space Journal Article
In: Information Fusion, vol. 118, pp. 102925, 2025, ISSN: 1566-2535.
@article{YU2025102925,
title = {Rethinking neural architecture representation for predictors: Topological encoding in pixel space},
author = {Caiyang Yu and Jian Wang and Yifan Wang and Wei Ju and Chenwei Tang and Jiancheng Lv},
url = {https://www.sciencedirect.com/science/article/pii/S1566253524007036},
doi = {10.1016/j.inffus.2024.102925},
issn = {1566-2535},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Information Fusion},
volume = {118},
pages = {102925},
abstract = {Neural predictors (NPs) aim to swiftly evaluate architectures during the neural architecture search (NAS) process. Precise evaluations with NPs heavily depend on the representation of training samples (i.e., the architectures), as the representation determines how well the NP captures the intrinsic properties and intricate dependencies of the architecture. Existing methods, which represent neural architectures as graph structures or sequences, are inherently limited in their expressive capabilities. In this study, we explore the image representation of neural architecture, describing the architecture in pixel space and using the long-range modeling capability of attention mechanisms to construct connections among pixels and extract tangible (tractable) architecture topology representation from them. Our attempt provides an efficient architecture representation for NPs, combined with today’s powerful pre-training models, showing promising prospects. Furthermore, recognizing that images alone may fall short in capturing configuration specifics, we design a corresponding text representation to provide a more accurate complement. Our experimental analysis reveals that the existing visual language model can efficiently identify the topological information in the pixel space. Additionally, we propose a Dual-Input Multichannel Neural Predictor (DIMNP) that simultaneously accepts multiple representations of architectures, facilitating information complementarity and accelerating convergence of the NP. Extensive experiments on NAS-Bench-101, NAS-Bench-201, and DARTS datasets demonstrate the superiority of DIMNP compared to the state-of-the-art NPs. In particular, on the NAS-Bench-101 and NAS-Bench-201 search spaces, DIMNP achieves performance improvements of 0.01 and 0.52, respectively, compared to the second-best algorithm on average.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Luo, Zhirui; Li, Qingqing; Qi, Ruobin; Zheng, Jun
Designing Channel Attention Fully Convolutional Networks with Neural Architecture Search for Customer Socio-Demographic Information Identification Using Smart Meter Data Journal Article
In: AI, vol. 6, no. 1, 2025, ISSN: 2673-2688.
@article{ai6010009,
title = {Designing Channel Attention Fully Convolutional Networks with Neural Architecture Search for Customer Socio-Demographic Information Identification Using Smart Meter Data},
author = {Zhirui Luo and Qingqing Li and Ruobin Qi and Jun Zheng},
url = {https://www.mdpi.com/2673-2688/6/1/9},
doi = {10.3390/ai6010009},
issn = {2673-2688},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {AI},
volume = {6},
number = {1},
abstract = {Background: Accurately identifying the socio-demographic information of customers is crucial for utilities. It enables them to efficiently deliver personalized energy services and manage distribution networks. In recent years, machine learning-based data-driven methods have gained popularity compared to traditional survey-based approaches, owing to their time and cost efficiency, as well as the availability of a large amount of high-frequency smart meter data. Methods: In this paper, we propose a new method that harnesses the power of neural architecture search to automatically design deep neural network architectures tailored for identifying various socio-demographic information of customers using smart meter data. We designed a search space based on a novel channel attention fully convolutional network architecture. Furthermore, we developed a search algorithm based on Bayesian optimization to effectively explore the space and identify high-performing architectures. Results: The performance of the proposed method was evaluated and compared with a set of machine learning and deep learning baseline methods using a smart meter dataset widely used in this research area. Our results show that the deep neural network architectures designed automatically by our proposed method significantly outperform all baseline methods in addressing the socio-demographic questions investigated in our study.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Archet, Agathe; Ventroux, Nicolas; Gac, Nicolas; Orieux, François
A practical HW-aware NAS flow for AI vision applications on embedded heterogeneous SoCs Proceedings Article
In: International Workshop on Design and Architecture for Signal and Image Processing 2025, Barcelona, Spain, 2025.
@inproceedings{archet:hal-04869471,
title = {A practical HW-aware NAS flow for AI vision applications on embedded heterogeneous SoCs},
author = {Agathe Archet and Nicolas Ventroux and Nicolas Gac and François Orieux},
url = {https://hal.science/hal-04869471},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {International Workshop on Design and Architecture for Signal and Image Processing 2025},
address = {Barcelona, Spain},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
He, Yannis Y.
TART: Token-based Architecture Transformer for Neural Network Performance Prediction Technical Report
2025.
@techreport{he2025tarttokenbasedarchitecturetransformer,
title = {TART: Token-based Architecture Transformer for Neural Network Performance Prediction},
author = {Yannis Y. He},
url = {https://arxiv.org/abs/2501.02007},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}