IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2014-09, Vol. 22 (9), pp. 2004-2016
Place / Publisher
New York: IEEE
Year of publication
2014
Source
IEEE Electronic Library Online
Descriptions/Notes
Applications from several application domains exhibit the property of inherent application resilience, offering entirely new avenues for performance and power optimization by relaxing the conventional requirement of exact (numerical or Boolean) equivalence between the specification and hardware implementation. We propose scalable effort hardware as a design approach to tap the reservoir of application resilience and translate it into highly efficient hardware implementations. The first tenet of the scalable effort design approach is to identify mechanisms at each level of design abstraction (circuit, architecture, and algorithm) that can be used to vary the computational effort expended toward generation of the correct (exact) result, and to expose these mechanisms as control knobs in the implementation. These scaling mechanisms can be utilized to achieve improved energy efficiency while maintaining an acceptable (and often, near identical) level of quality of the overall result. The second tenet of the scalable effort design approach is that fully exploiting the potential of application resilience requires synergistic cross-layer optimization of scaling mechanisms identified at different levels of design abstraction. We have implemented an energy-efficient recognition and mining (RM) processor based on the proposed scalable effort design approach. Results from the execution of support vector machine training and classification, generalized learning vector quantization training, and k-means clustering on the scalable effort RM processor show that it can achieve energy reductions of 1.2×-5× with negligible impact on output quality, and 2.2×-50× with moderate loss in output quality, across various data sets. Our results also establish that cross-layer optimization across different scaling mechanisms leads to higher energy savings (1.4×-2× on average) for a given output quality compared with each of the individual techniques.
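
The abstract mentions k-means clustering as one of the workloads whose computational effort can be scaled at the algorithm level. As a rough, illustrative sketch only (this is not the paper's RM-processor implementation), the following Python snippet exposes two hypothetical "effort knobs", max_iters and tol, that trade clustering quality for computation in the spirit of the scalable effort idea:

    # Illustrative sketch: algorithm-level effort scaling for k-means.
    # max_iters and tol are hypothetical knobs, not the paper's mechanisms.
    import numpy as np

    def kmeans_scaled_effort(X, k, max_iters=10, tol=1e-2, seed=0):
        """Lower max_iters / looser tol = less effort, more approximate result."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        labels = np.zeros(len(X), dtype=int)
        for _ in range(max_iters):
            # Assign each point to its nearest center.
            dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Recompute centers; keep the old center if a cluster is empty.
            new_centers = np.array([X[labels == j].mean(axis=0)
                                    if np.any(labels == j) else centers[j]
                                    for j in range(k)])
            # Effort knob: stop early once the centers move less than tol.
            if np.linalg.norm(new_centers - centers) < tol:
                centers = new_centers
                break
            centers = new_centers
        return centers, labels

Calling kmeans_scaled_effort(X, k=4, max_iters=3, tol=1e-1) expends less computational effort and yields an approximate clustering, while tightening both knobs recovers near-exact convergence at higher cost, which mirrors the quality-versus-energy trade-off the abstract describes.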