Auto Tiny Classifiers: Efficient Hardware Solutions for Tabular Data

In a recent article published in the journal Nature Electronics, researchers from the UK introduced a novel methodology called auto tiny classifiers. This approach automatically generates classifier circuits tailored for tabular data.

A classifier circuit as a hardware accelerator within a system. The system can be thought of as a single classification circuit unit that produces classification ‘guesses’. The prediction may be a single bit (binary classification) or a set of bits (multiclass classification) that encode the target class. Besides the actual classification circuit, the design uses buffers to hold the input and output data; keeping these buffers local eliminates data transfers within the system and keeps the required data close to the computation block. Image Credit: https://www.nature.com/articles/s41928-024-01157-5
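To illustrate the dataflow the caption describes, here is a minimal Python sketch (not the authors' implementation): a hypothetical tiny classifier modeled as a small NAND-gate netlist that reads binarized features from a local input buffer and leaves a one-bit prediction as its output. The netlist and the four-bit input are invented for illustration.

```python
# Minimal sketch of a tiny classifier as a NAND-gate netlist with local
# input/output buffering. The netlist below is hypothetical; in the paper,
# real circuits are evolved automatically from the training data.
from typing import List, Tuple

# Each gate lists the indices of its two inputs in the combined signal
# list: input-buffer bits first, then the outputs of earlier gates.
NETLIST: List[Tuple[int, int]] = [(0, 1), (2, 3), (4, 5)]

def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def classify(input_buffer: List[int]) -> int:
    """Evaluate the netlist; the last gate's output is the prediction bit."""
    signals = list(input_buffer)              # local input buffer
    for a, b in NETLIST:
        signals.append(nand(signals[a], signals[b]))
    return signals[-1]                        # single-bit 'guess'

# Four binarized tabular features in, one classification 'guess' out.
print(classify([1, 0, 1, 1]))
```

In hardware, the same structure is purely combinational logic, which is why its power and area scale with the gate count rather than with stored model parameters.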

In particular, the innovation offers prediction performance comparable to conventional machine learning techniques while using substantially fewer hardware resources and less power. The researchers also demonstrated the practical application of the circuits as flexible integrated circuits, comparing them against two established machine learning baseline models.

Background

Tabular data is utilized across various fields, including recommender systems, medical diagnosis, and smart packaging. However, these datasets pose challenges due to their heterogeneous nature, which involves the combination of numerical and categorical data with weak correlations among features. This complexity presents obstacles for deep learning architectures, which excel at capturing spatial or semantic relationships typically found in image or speech data.

In machine learning development, a prevalent approach is to optimize performance during model training and then reduce the memory and area footprint of the trained model for deployment across diverse platforms, including processing cores, graphics processing units, microcontrollers, or custom hardware accelerators.

However, achieving a balance between performance and resource efficiency becomes difficult as machine learning models grow larger and more intricate. Moreover, implementing machine learning in hardware requires additional steps like translation, verification, and optimization, which may introduce errors and overheads into the process.

About the Research

The paper introduced auto tiny classifiers, a novel method for automatically generating classifier circuits directly from tabular data. Unlike traditional approaches, this method does not rely on pre-defined machine learning models or hardware circuits. These classifier circuits consist of only a few hundred logic gates yet achieve prediction accuracy comparable to state-of-the-art machine learning techniques such as gradient-boosted decision trees and deep neural networks.

The study employs an evolutionary algorithm to explore the space of logic-gate circuits and generate classifiers that maximize prediction accuracy on the training data. The algorithm emulates Darwinian natural evolution, with circuit fitness evaluated as balanced accuracy; the search terminates when validation accuracy fails to improve by a specified threshold within a window of generations.
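To make that search loop concrete, below is a toy Python sketch under stated assumptions: it evolves flat NAND-gate netlists rather than the paper's graph-based genetic programming representation, and the population size, mutation operator, and plateau threshold are illustrative choices rather than values from the study. The balanced-accuracy fitness and validation-plateau termination follow the description above.

```python
# Toy sketch of the evolutionary search described above; a simplified
# stand-in for the paper's graph-based genetic programming, not its code.
import random
from typing import List, Tuple

Netlist = List[Tuple[int, int]]  # each gate: indices of its two NAND inputs

def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def predict(net: Netlist, bits: List[int]) -> int:
    signals = list(bits)                      # input bits, then gate outputs
    for a, b in net:
        signals.append(nand(signals[a], signals[b]))
    return signals[-1]                        # last gate = 1-bit prediction

def balanced_accuracy(net: Netlist, X, y) -> float:
    # Mean of per-class accuracies, the fitness measure named in the article.
    correct = {0: 0, 1: 0}; total = {0: 0, 1: 0}
    for bits, label in zip(X, y):
        total[label] += 1
        correct[label] += int(predict(net, bits) == label)
    return 0.5 * sum(correct[c] / max(total[c], 1) for c in (0, 1))

def mutate(net: Netlist, n_inputs: int) -> Netlist:
    # Rewire one input of one randomly chosen gate (illustrative operator).
    child = list(net)
    i = random.randrange(len(child))
    a, b = child[i]
    new_src = random.randrange(n_inputs + i)  # any earlier signal is legal
    child[i] = (new_src, b) if random.random() < 0.5 else (a, new_src)
    return child

def evolve(X_tr, y_tr, X_val, y_val, n_inputs: int, n_gates: int = 32,
           pop_size: int = 20, patience: int = 10, eps: float = 1e-3):
    pop = [[(random.randrange(n_inputs + g), random.randrange(n_inputs + g))
            for g in range(n_gates)] for _ in range(pop_size)]
    best_val, stale = 0.0, 0
    while stale < patience:                   # stop when validation plateaus
        pop.sort(key=lambda n: balanced_accuracy(n, X_tr, y_tr), reverse=True)
        pop = pop[:pop_size // 2] + [mutate(n, n_inputs)
                                     for n in pop[:pop_size // 2]]
        val = balanced_accuracy(pop[0], X_val, y_val)
        best_val, stale = (val, 0) if val > best_val + eps else (best_val, stale + 1)
    return pop[0], best_val
```

A real implementation would evolve circuit graphs with a richer gate set and genetic operators; the skeleton above only mirrors the fitness criterion and stopping rule described in the article.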

To validate their methodology, the researchers assessed it across 33 tabular datasets, primarily sourced from the OpenML, University of California, Irvine (UCI), and Kaggle repositories. Comparative evaluations were conducted against Google's TabNet architecture, AutoGluon (an AutoML system developed by Amazon), and other standard machine learning baselines. Additionally, the authors implemented tiny classifiers and the baseline models in hardware, targeting both conventional silicon technology and flexible integrated circuits (FlexICs).

Findings

The results showed that, across all datasets, AutoGluon XGBoost achieved the highest average prediction accuracy at 81%, while tiny classifiers followed closely at 78%, the second highest overall. The analysis also showed that the accuracy distribution of tiny classifiers had low variance, indicating robustness across datasets.

Synthesizing the tiny classifiers and baseline models with the Synopsys Design Compiler, targeting an open 45 nm process design kit (PDK), the authors quantified the hardware costs. Tiny classifier circuits consumed between 0.04 and 0.97 mW, with gate counts ranging from 11 to 426 NAND2-equivalent gates.
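For intuition, a NAND2-equivalent count expresses a circuit's size as the number of two-input NAND cells occupying the same area. The back-of-envelope conversion below uses an assumed cell area for a generic 45 nm library; the paper does not quote this figure, so treat the constant as a placeholder.

```python
# Back-of-envelope: convert NAND2-equivalent gate counts to rough cell area.
NAND2_AREA_UM2 = 0.8  # assumed NAND2 cell area for a generic 45 nm library

for name, gates in [("smallest tiny classifier", 11),
                    ("largest tiny classifier", 426)]:
    print(f"{name}: {gates} NAND2-eq gates ≈ {gates * NAND2_AREA_UM2:.1f} µm²")
```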

In contrast, the multilayer perceptron (MLP) consumed 34–38 mW, an 86–118 times increase over the tiny classifiers, and occupied approximately 171 and 278 times more area for the ‘blood’ and ‘led’ datasets, respectively. Similarly, XGBoost exhibited approximately 3.9 and 8.0 times higher power consumption and 8.0 and 18.0 times larger area than the tiny classifiers on the same two datasets.
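As a quick arithmetic check on these figures, dividing the reported MLP power by the quoted ratios recovers the tiny-classifier operating points. Pairing 34 mW with 86× and 38 mW with 118× is an assumption (the article quotes only the ranges), but both implied values fall inside the 0.04–0.97 mW range reported above.

```python
# Sanity check: infer tiny-classifier power implied by the MLP comparison.
mlp_power_mw = (34, 38)    # reported MLP power range
power_ratio = (86, 118)    # reported MLP-to-tiny-classifier power ratio
implied_mw = [p / r for p, r in zip(mlp_power_mw, power_ratio)]
print(implied_mw)          # ≈ [0.395, 0.322] mW, within 0.04–0.97 mW
```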

Furthermore, the authors implemented the tiny classifiers and XGBoost as flexible chips using Pragmatic's 0.8 μm FlexIC metal-oxide thin-film transistor process. The tiny classifiers could be clocked two to three times faster, were 10–75 times smaller, and consumed less power than XGBoost. Moreover, the tiny classifier chips exhibited a sixfold higher yield than the XGBoost chips, suggesting lower production costs.

The new methodology could be used in a variety of applications, such as triggering circuits within systems-on-chip, smart packaging equipped with FlexICs, and near-sensor computing systems where inference is performed at the data source. It is not limited to tabular data and could be extended to other forms of data, such as time series, by using recurrent graph-based genetic programming.

Conclusion

To sum up, the researchers designed an effective and adaptable approach for automatically producing classifier circuits for tabular data. The technique offers prediction performance comparable to conventional machine learning techniques while using far fewer resources, and the study showed that it can yield low-cost, flexible chips suitable for integration into a variety of applications.

Moving forward, the researchers acknowledged limitations and challenges and suggested directions for future work. They recommended exploring other fitness functions, such as the number of gates or power consumption, and using multi-objective graph-based genetic programming to search for the Pareto-optimal front of solutions and characterize the trade-off between objectives.

Journal reference:

Iordanou, K., Atkinson, T., Ozer, E., Kufel, J., Aligada, G., Biggs, J., Brown, G., & Luján, M. (2024). Low-cost and efficient prediction hardware for tabular data using tiny classifier circuits. Nature Electronics. https://doi.org/10.1038/s41928-024-01157-5

Written by

Muhammad Osama

Muhammad Osama is a full-time data analytics consultant and freelance technical writer based in Delhi, India. He specializes in transforming complex technical concepts into accessible content. He has a Bachelor of Technology in Mechanical Engineering with specialization in AI & Robotics from Galgotias University, India, and he has extensive experience in technical content writing, data science and analytics, and artificial intelligence.

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Osama, Muhammad. (2024, May 02). Auto Tiny Classifiers: Efficient Hardware Solutions for Tabular Data. AZoAi. Retrieved on May 18, 2024 from https://www.azoai.com/news/20240502/Auto-Tiny-Classifiers-Efficient-Hardware-Solutions-for-Tabular-Data.aspx.

  • MLA

    Osama, Muhammad. "Auto Tiny Classifiers: Efficient Hardware Solutions for Tabular Data". AZoAi. 18 May 2024. <https://www.azoai.com/news/20240502/Auto-Tiny-Classifiers-Efficient-Hardware-Solutions-for-Tabular-Data.aspx>.

  • Chicago

    Osama, Muhammad. "Auto Tiny Classifiers: Efficient Hardware Solutions for Tabular Data". AZoAi. https://www.azoai.com/news/20240502/Auto-Tiny-Classifiers-Efficient-Hardware-Solutions-for-Tabular-Data.aspx. (accessed May 18, 2024).

  • Harvard

    Osama, Muhammad. 2024. Auto Tiny Classifiers: Efficient Hardware Solutions for Tabular Data. AZoAi, viewed 18 May 2024, https://www.azoai.com/news/20240502/Auto-Tiny-Classifiers-Efficient-Hardware-Solutions-for-Tabular-Data.aspx.

The opinions expressed here are the views of the writer and do not necessarily reflect the views and opinions of AZoAi.
