    UID: almahu_9949767382902882
    Extent: 1 online resource (249 pages)
    Edition: 1st ed.
    ISBN: 9783031548277
    Note: Intro -- Foreword by Florian Schütz -- Foreword by Jan Kleijssen -- Preface -- Acknowledgments -- Contents -- List of Contributors -- Reviewers -- Acronyms --
    Part I Introduction -- 1 From Deep Neural Language Models to LLMs -- 1.1 What LLMs Are and What LLMs Are Not -- 1.2 Principles of LLMs -- 1.2.1 Deep Neural Language Models -- 1.2.2 Generative Deep Neural Language Models -- 1.2.3 Generating Text -- 1.2.4 Memorization vs Generalization -- 1.2.5 Effect of the Model and Training Dataset Size -- References -- 2 Adapting LLMs to Downstream Applications -- 2.1 Prompt Optimization -- 2.2 Pre-Prompting and Implicit Prompting -- 2.3 Model Coordination: Actor-Agents -- 2.4 Integration with Tools -- 2.5 Parameter-Efficient Fine-Tuning -- 2.6 Fine-Tuning -- 2.7 Further Pretraining -- 2.8 From-Scratch Re-Training -- 2.9 Domain-Specific Distillation -- References -- 3 Overview of Existing LLM Families -- 3.1 Introduction -- 3.2 Pre-Transformer LLMs -- 3.3 BERT and Friends -- 3.4 GPT Family Proper -- 3.5 Generative Autoregressors (GPT Alternatives) -- 3.6 Compute-Optimal Models -- 3.6.1 LLaMA Family -- 3.7 Full-Transformer/Sequence-to-Sequence Models -- 3.8 Multimodal and Mixture-of-Experts Models -- 3.8.1 Multimodal Visual LLMs -- 3.8.2 Pathways Language Model, PaLM -- 3.8.3 GPT-4 and BingChat -- References -- 4 Conversational Agents -- 4.1 Introduction -- 4.2 GPT Related Conversational Agents -- 4.3 Alternative Conversational Agent LLMs -- 4.3.1 Conversational Agents Without Auxiliary Capabilities -- 4.3.2 Conversational Agents With Auxiliary Capabilities -- 4.3.2.1 Models With Non-Knowledge Auxiliary Capabilities -- 4.4 Conclusion -- References -- 5 Fundamental Limitations of Generative LLMs -- 5.1 Introduction -- 5.2 Generative LLMs Cannot Be Factual -- 5.3 Generative LLMs With Auxiliary Tools Still Struggle To Be Factual -- 5.4 Generative LLMs Will Leak Private Information -- 5.5 Generative LLMs Have Trouble With Reasoning -- 5.6 Generative LLMs Forget Fast and Have a Short Attention Span -- 5.7 Generative LLMs Are Only Aware of What They Saw at Training -- 5.8 Generative LLMs Can Generate Highly Inappropriate Texts -- 5.9 Generative LLMs Learn and Perpetrate Societal Bias -- References -- 6 Tasks for LLMs and Their Evaluation -- 6.1 Introduction -- 6.2 Natural Language Tasks -- 6.2.1 Reading Comprehension -- 6.2.2 Question Answering -- 6.2.3 Common Sense Reasoning -- 6.2.4 Natural Language Generation -- 6.3 Conclusion -- References --
    Part II LLMs in Cybersecurity -- 7 Private Information Leakage in LLMs -- 7.1 Introduction -- 7.2 Information Leakage -- 7.3 Extraction -- 7.4 Jailbreaking -- 7.5 Conclusions -- References -- 8 Phishing and Social Engineering in the Age of LLMs -- 8.1 LLMs in Phishing and Social Engineering -- 8.2 Case Study: Orchestrating Large-Scale Scam Campaigns -- 8.3 Case Study: Shā Zhū Pán Attacks -- References -- 9 Vulnerabilities Introduced by LLMs Through Code Suggestions -- 9.1 Introduction -- 9.2 Relationship Between LLMs and Code Security -- 9.2.1 Vulnerabilities and Risks Introduced by LLM-Generated Code -- 9.3 Mitigating Security Concerns With LLM-Generated Code -- 9.4 Conclusion and The Path Forward -- References -- 10 LLM Controls Execution Flow Hijacking -- 10.1 Faulting Controls: The Genesis of Execution Flow Hijacking -- 10.2 Unpacking Execution Flow: LLMs' Sensitivity to User-Provided Text -- 10.3 Examples of LLMs Execution Flow Attacks -- 10.4 Securing Uncertainty: Security Challenges in LLMs -- 10.5 Security by Design: Shielding Probabilistic Execution Flows -- References -- 11 LLM-Aided Social Media Influence Operations -- 11.1 Introduction -- 11.2 Salience of LLMs -- 11.3 Potential Impact -- 11.4 Mitigation -- References -- 12 Deep(er) Web Indexing with LLMs -- 12.1 Introduction -- 12.2 Innovation Through Integration of LLMs -- 12.3 Navigating Complexities: Challenges and Mitigation Strategies -- 12.3.1 Desired Behavior of LLM-Based Search Query Creation Tools -- 12.3.2 Engineering Challenges and Mitigations -- 12.3.2.1 Ethical and Security Concerns -- 12.3.2.2 Fidelity of Query Responses and Model Accuracy -- 12.3.2.3 Linguistic and Regulatory Variations -- 12.3.2.4 Handling Ambiguous Queries -- 12.4 Key Takeaways -- 12.5 Conclusion and Reflections -- References --
    Part III Tracking and Forecasting Exposure -- 13 LLM Adoption Trends and Associated Risks -- 13.1 Introduction -- 13.2 In-Context Learning vs Fine-Tuning -- 13.3 Adoption Trends -- 13.3.1 LLM Agents -- 13.4 Potential Risks -- References -- 14 The Flow of Investments in the LLM Space -- 14.1 General Context: Investments in the Sectors of AI, ML, and Text Analytics -- 14.2 Discretionary Evidence -- 14.3 Future Work with Methods Already Applied to AI and ML -- References -- 15 Insurance Outlook for LLM-Induced Risk -- 15.1 General Context of Cyber Insurance -- 15.1.1 Cyber-Risk Insurance -- 15.1.2 Cybersecurity and Breaches Costs -- 15.2 Outlook for Estimating the Insurance Premia of LLMs Cyber Insurance -- References -- 16 Copyright-Related Risks in the Creation and Use of ML/AI Systems -- 16.1 Introduction -- 16.2 Concerns of Owners of Copyrighted Works -- 16.3 Concerns of Users Who Incorporate Content Generated by ML/AI Systems Into Their Creations -- 16.4 Mitigating the Risks -- References -- 17 Monitoring Emerging Trends in LLM Research -- 17.1 Introduction -- 17.2 Background -- 17.3 Data and Methods: Noun Extraction -- 17.4 Results -- 17.4.1 Domain Experts Validation and Interpretations -- 17.5 Discussion, Limitations and Further Research -- 17.6 Conclusion -- References --
    Part IV Mitigation -- 18 Enhancing Security Awareness and Education for LLMs -- 18.1 Introduction -- 18.2 Security Landscape of LLMs -- 18.3 Foundations of LLM Security Education -- 18.4 The Role of Education in Sub-Areas of LLM Deployment and Development -- 18.5 Empowering Users Against Security Breaches and Risks -- 18.6 Advanced Security Training for LLM Users -- 18.7 Conclusion and the Path Forward -- References -- 19 Towards Privacy Preserving LLMs Training -- 19.1 Introduction -- 19.2 Dataset Pre-processing with Anonymization and De-duplication -- 19.3 Differential Privacy for Fine-Tuning Models -- 19.4 Differential Privacy for Deployed Models -- 19.5 Conclusions -- References -- 20 Adversarial Evasion on LLMs -- 20.1 Introduction -- 20.2 Evasion Attacks in Image Classification -- 20.3 Impact of Evasion Attacks on the Theory of Deep Learning -- 20.4 Evasion Attacks for Language Processing and Applicability to Large Language Models -- References -- 21 Robust and Private Federated Learning on LLMs -- 21.1 Introduction -- 21.1.1 Peculiar Challenges of LLMs -- 21.2 Robustness to Malicious Clients -- 21.3 Privacy Protection of Clients' Data -- 21.4 Synthesis of Robustness and Privacy -- 21.5 Concluding Remarks -- References -- 22 LLM Detectors -- 22.1 Introduction -- 22.2 LLMs' Salience -- 22.2.1 General Detectors -- 22.2.2 Specific Detectors -- 22.3 Potential Mitigation -- 22.3.1 Watermarking -- 22.3.2 DetectGPT -- 22.3.3 Retrieval Based -- 22.4 Mitigation -- References -- 23 On-Site Deployment of LLMs -- 23.1 Introduction -- 23.2 Open-Source Development -- 23.3 Technical Solution -- 23.3.1 Serving -- 23.3.2 Quantization -- 23.3.3 Energy Costs -- 23.4 Risk Assessment -- References -- 24 LLMs Red Teaming -- 24.1 History and Evolution of Red-Teaming Large Language Models -- 24.2 Making LLMs Misbehave -- 24.3 Attacks -- 24.3.1 Classes of Attacks on Large Language Models -- 24.3.1.1 Prompt-Level Attacks -- 24.3.1.2 Contextual Limitations: A Fundamental Weakness -- 24.3.1.3 Mechanisms of Distractor and Formatting Attacks -- 24.3.1.4 The Role of Social Engineering -- 24.3.1.5 Integration of Fuzzing and Automated Machine Learning Techniques for Scalability -- 24.4 Datasets -- 24.5 Defensive Mechanisms Against Manual and Automated Attacks on LLMs -- 24.6 The Future -- Appendix -- References -- 25 Standards for LLM Security -- 25.1 Introduction -- 25.2 The Cybersecurity Landscape -- 25.2.1 MITRE CVEs -- 25.2.2 CWE -- 25.2.3 MITRE ATT&CK and Cyber Kill Chain -- 25.3 Existing Standards -- 25.3.1 AI RMF Playbook -- 25.3.2 OWASP Top 10 for LLMs -- 25.3.3 AI Vulnerability Database -- 25.3.4 MITRE ATLAS -- 25.4 Looking Ahead -- References --
    Part V Conclusion -- 26 Exploring the Dual Role of LLMs in Cybersecurity: Threats and Defenses -- 26.1 Introduction -- 26.2 LLM Vulnerabilities -- 26.2.1 Security Concerns -- 26.2.1.1 Data Leakage -- 26.2.1.2 Toxic Content -- 26.2.1.3 Disinformation -- 26.2.2 Attack Vectors -- 26.2.2.1 Backdoor Attacks -- 26.2.2.2 Prompt Injection Attacks -- 26.2.3 Testing LLMs -- 26.3 Code Creation Using LLMs -- 26.3.1 How Secure is LLM-Generated Code? -- 26.3.2 Generating Malware -- 26.4 Shielding with LLMs -- 26.5 Conclusion -- References -- 27 Towards Safe LLMs Integration -- 27.1 Introduction -- 27.2 The Attack Surface -- 27.3 Impact -- 27.4 Mitigation -- References.
    Other edition: Print version: Kucharavy, Andrei. Large Language Models in Cybersecurity. Cham: Springer International Publishing AG, c2024. ISBN 9783031548260
    Language: English
    Subject(s): Electronic books.
    URL: Full text (free of charge)