  • 1
    UID: gbv_545231620
    ISSN: 0885-6230
    Note: Volume: 19; Issue: 3; Pages: 266-270
    In: International journal of geriatric psychiatry, Malden, Mass. : Wiley-Blackwell, 1986, 19(2004), 3, pages 266-270, 0885-6230
    In: volume:19
    In: year:2004
    In: number:3
    In: pages:266-270
    Language: English
  • 2
    UID: gbv_1759103098
    Format: 1 online resource (1 PDF (xxi, 319 pages)), illustrations (some color)
    Edition: Also available in print
    ISBN: 9781681738321, 9781681738338
    Series Statement: Synthesis lectures in computer architecture #50
    Content: part I. Understanding deep neural networks -- 1. Introduction -- 1.1. Background on deep neural networks -- 1.2. Training versus inference -- 1.3. Development history -- 1.4. Applications of DNNs -- 1.5. Embedded versus cloud
    Content: 2. Overview of deep neural networks -- 2.1. Attributes of connections within a layer -- 2.2. Attributes of connections between layers -- 2.3. Popular types of layers in DNNs -- 2.4. Convolutional neural networks (CNNs) -- 2.5. Other DNNs -- 2.6. DNN development resources
    Content: part II. Design of hardware for processing DNNs -- 3. Key metrics and design objectives -- 3.1. Accuracy -- 3.2. Throughput and latency -- 3.3. Energy efficiency and power consumption -- 3.4. Hardware cost -- 3.5. Flexibility -- 3.6. Scalability -- 3.7. Interplay between different metrics
    Content: 4. Kernel computation -- 4.1. Matrix multiplication with Toeplitz -- 4.2. Tiling for optimizing performance -- 4.3. Computation transform optimizations -- 4.4. Summary
    Content: 5. Designing DNN accelerators -- 5.1. Evaluation metrics and design objectives -- 5.2. Key properties of DNN to leverage -- 5.3. DNN hardware design considerations -- 5.4. Architectural techniques for exploiting data reuse -- 5.5. Techniques to reduce reuse distance -- 5.6. Dataflows and loop nests -- 5.7. Dataflow taxonomy -- 5.8. DNN accelerator buffer management strategies -- 5.9. Flexible NoC design for DNN accelerators -- 5.10. Summary
    Content: 6. Operation mapping on specialized hardware -- 6.1. Mapping and loop nests -- 6.2. Mappers and compilers -- 6.3. Mapper organization -- 6.4. Analysis framework for energy efficiency -- 6.5. Eyexam : framework for evaluating performance -- 6.6. Tools for map space exploration
    Content: part III. Co-design of DNN hardware and algorithms -- 7. Reducing precision -- 7.1. Benefits of reduce precision -- 7.2. Determining the bit width -- 7.3. Mixed precision : different precision for different data types -- 7.4. Varying precision : change precision for different parts of the DNN -- 7.5. Binary nets -- 7.6. Interplay between precision and other design choices -- 7.7. Summary of design considerations for reducing precision
    Content: 8. Exploiting sparsity -- 8.1. Sources of sparsity -- 8.2. Compression -- 8.3. Sparse dataflow -- 8.4. Summary
    Content: 9. Designing efficient DNN models -- 9.1. Manual network design -- 9.2. Neural architecture search -- 9.3. Knowledge distillation -- 9.4. Design considerations for efficient DNN models
    Content: 10. Advanced technologies -- 10.1. Processing near memory -- 10.2. Processing in memory -- 10.3. Processing in sensor -- 10.4. Processing in the optical domain -- 11. Conclusion.
    Content: This book provides a structured treatment of the key principles and techniques for enabling efficient processing of deep neural networks (DNNs). DNNs are currently widely used for many artificial intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this accuracy comes at the cost of high computational complexity. Therefore, techniques that enable efficient processing of deep neural networks to improve key metrics (such as energy efficiency, throughput, and latency) without sacrificing accuracy or increasing hardware costs are critical to enabling the wide deployment of DNNs in AI systems. The book includes background on DNN processing; a description and taxonomy of hardware architectural approaches for designing DNN accelerators; key metrics for evaluating and comparing different designs; features of DNN processing that are amenable to hardware/algorithm co-design to improve energy efficiency and throughput; and opportunities for applying new technologies. Readers will find a structured introduction to the field as well as formalization and organization of key concepts from contemporary work that provide insights that may spark new ideas.
    Note: Part of: Synthesis digital library of engineering and computer science; Includes bibliographical references (pages 283-316); Compendex; INSPEC; Google scholar; Google book search; Also available in print; Mode of access: World Wide Web; System requirements: Adobe Acrobat Reader.
    Additional Edition: Also available as a print edition, ISBN 9781681738352
    Additional Edition: ISBN 9781681738314
    Language: English
    Subjects: Economics