
Optimized Calibration for Analog Computations Targeting Deep Neural Networks on the Example of BrainScaleS-2

Kern, Eric Matthias

Full text: Kern_Eric.pdf (PDF, English, 1 MB)

Citation of documents: Please do not cite the URL displayed in your browser's location bar; instead, use the DOI, URN, or persistent URL below, as only for these can long-term accessibility be guaranteed.

Abstract

Machine learning is pervasive today, but as models grow more complex, deploying them becomes increasingly costly. This work explores analog computing as a scalable and energy-efficient alternative to the digital computations typically used. Leveraging the BrainScaleS-2 system as an analog matrix multiplication accelerator, we optimize for typical imperfections, such as noise and saturation effects, that measurably degrade accuracy. While training with hardware in the loop can partially recover lost accuracy, the gap usually cannot be fully bridged, deterring potential users. Our research aims to narrow this gap through calibration parameter adjustments and algorithmic optimizations. We examine how the translation of operands to the hardware affects training and analyze the effects of calibration parameters in order to develop a solution tuned for neural networks and ultimately enhance their accuracy. As the primary contribution, we find that higher circuit time constants, which lead to a higher average load on the analog amplifiers, allow for decreased amplifier gain at similar output amplitudes while improving noise behavior. Using this custom calibration together with statically scaled input operands, we achieve an accuracy improvement of approximately 41 % before retraining and 7 % afterward. Notably, we identify that the varying matrix sizes, caused by the differing numbers of neurons between layers, require gain adjustments, which can potentially increase accuracy further. Our work addresses common challenges in the efficient execution of artificial neural networks on BrainScaleS-2, providing a path toward competitive usage. These findings have significant implications for cost-effective and energy-efficient machine learning applications.
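The abstract mentions statically scaled input operands, i.e. translating floating-point activations into the accelerator's limited operand range with one fixed scale factor rather than a per-batch one. A minimal sketch of what such static scaling could look like, assuming a small signed integer operand range; the function name, the 5-bit range, and the percentile-based calibration are illustrative assumptions, not details taken from the thesis:

```python
import numpy as np

def scale_inputs_static(x, target_max=31, percentile=100.0):
    """Statically scale inputs to a signed integer operand range.

    A single scale factor is computed once (e.g. from calibration data)
    and reused for all subsequent inputs, so the hardware sees a
    consistent dynamic range. `target_max=31` mimics an illustrative
    5-bit signed range; saturation is modeled by clipping.
    """
    ref = np.percentile(np.abs(x), percentile)  # reference amplitude
    scale = target_max / ref if ref > 0 else 1.0
    # Round to integers and clip to emulate the saturating operand range.
    scaled = np.clip(np.round(x * scale), -target_max, target_max)
    return scaled.astype(np.int32), scale

# Calibrate once on sample data, then reuse the same scale factor.
x = np.array([-1.0, -0.25, 0.0, 0.5, 1.0])
q, s = scale_inputs_static(x)
# scale = 31.0; q = [-31, -8, 0, 16, 31]
```

Choosing a percentile below 100 would trade occasional clipping of outliers for better resolution of typical values, which is one way the operand translation can interact with accuracy before and after retraining.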

Document type: Master's thesis
Supervisor: Fröning, Prof. Dr. Holger
Place of Publication: Heidelberg
Date of thesis defense: 2023
Date Deposited: 30 Apr 2026 10:29
Date: 2026
Faculties / Institutes: Service facilities > Institut f. Technische Informatik (ZITI)
DDC-classification: 004 Data processing Computer science
Collection: Institute of Computer Engineering - Selected theses