    UID: almahu_9949697349902882
    Extent: 1 online resource (306 pages)
    Edition: 1st ed.
    ISBN: 0-443-15987-4
    Note: Contents:
    Intro -- Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams -- Copyright -- Contents -- Contributors -- About the editors
    Chapter 1: Introduction -- 1. Theme of the symposium -- 2. Teams and teamwork -- 3. Team situation awareness (TSA) -- 4. The trust dimension -- 5. Summary remarks -- References
    Chapter 2: Alternative paths to developing engineering solutions for human-machine teams -- 1. Introduction -- 2. Panel organization -- 3. Test vignettes -- 3.1. Urgent recommendations -- 3.2. Robotic support -- 3.3. Periodic teaming -- 4. Major issues raised -- 4.1. Trust and trustworthiness -- 4.2. Human-machine dialogue -- 4.3. Agency -- 4.4. Regulators -- 5. Future work -- 6. Summary of presentations by panel members -- 6.1. Andrzej Banaszuk -- 6.2. Bill Casebeer -- 6.3. Michael Fisher -- 6.4. Jean-Charles Ledé -- 7. Biographies of panelists and moderators -- References
    Chapter 3: Risk determination vs risk perception: From hate speech, an erroneous drone attack, and military nuclear waste ... -- 1. Introduction -- 2. Situation -- 3. Case studies -- 4. How to fix? -- 5. A work-in-progress: Future autonomous systems -- 6. Rationality -- 7. Deception -- 8. Innovation: A trade-off between innovation and suppression -- 9. Conclusions -- References
    Chapter 4: Appropriate context-dependent artificial trust in human-machine teamwork -- 1. Introduction -- 2. Trust definition -- 3. Trust models, Krypta and Manifesta -- 3.1. Models -- 3.2. Manifesta and Krypta -- 3.3. Context-dependent models and their dimensions -- 4. Trust as a context-dependent model -- 4.1. Task -- 4.2. Team configuration -- 4.3. Summary: A taxonomy -- 5. Trust as a belief of trustworthiness -- 5.1. Forming (appropriate) artificial trust -- 5.2. Calibrating natural trust -- 6. Discussion and future work -- 7. Conclusion -- References
    Chapter 5: Toward a causal modeling approach for trust-based interventions in human-autonomy teams -- 1. Human-autonomy teams -- 2. Trust in human-autonomy teams -- 3. Trust measurement in HAT -- 4. Interventions and teaming -- 5. Our model human-autonomy teaming scenario -- 6. A brief overview of causal modeling -- 6.1. Bayesian networks -- 6.2. Structural equation modeling -- 7. Causal modeling in context -- 8. Conclusions -- References
    Chapter 6: Risk management in human-in-the-loop AI-assisted attention aware systems -- 1. Introduction -- 2. Attention aware systems -- 3. Risk management of attention aware systems -- 4. Risk management considerations -- 5. Risk management approaches -- 6. Discussion and conclusions -- References
    Chapter 7: Enabling trustworthiness in human-swarm systems through a digital twin -- 1. Introduction -- 2. Trustworthy human-swarm interaction -- 2.1. What is trust? -- 2.2. Trust in autonomous systems -- 2.3. Trust in multirobot systems -- 3. Industry-led trust requirements -- 3.1. Method -- 3.1.1. Perceptual cycle model -- 3.1.2. SWARM taxonomy -- 3.1.3. Interview questions -- 3.1.4. Equipment and procedure -- 3.1.5. Data analysis and results -- 4. Explainability of human-swarm systems -- 4.1. Research method -- 4.1.1. Participants -- 4.1.2. Equipment and procedure -- 4.1.3. Data analysis -- 4.2. Categories of swarm explanations -- 4.2.1. Consensus-based -- 4.2.2. Path planning-based -- 4.2.3. Communication-based -- 4.2.4. Scheduling-based -- 4.2.5. Hardware-based -- 4.2.6. Architecture and design-based -- 5. Use-case development -- 5.1. Cocreation process -- 5.1.1. Procedure -- 5.1.2. Analysis -- 5.2. Collated use case -- 5.2.1. Compiled use-case background -- 5.2.2. Compiled use-case agents and tasks -- 5.3. Discussion -- 6. Human-swarm teaming simulation platform -- 6.1. Simulated use case -- 7. Compliance with requirements -- 7.1. Requirements for software quality characteristics transformation -- 7.2. Simulation software quality characteristics -- 7.2.1. Functionality -- 7.2.2. Reliability -- 7.2.3. Usability -- 7.2.4. Efficiency -- 7.2.5. Maintainability -- 7.2.6. Portability -- 8. Discussion and conclusion -- Acknowledgments -- References
    Chapter 8: Building trust with the ethical affordances of education technologies: A sociotechnical systems perspective -- 1. Introduction -- 2. Operationalizing ethics in learning engineering: From values to assessment -- 2.1. Ethical sensemaking in design -- 2.2. Psychometrics and validity -- 2.3. Operationalizing ethics in psychometrics: Differential item function -- 3. AI-based technologies for instruction and assessment -- 3.1. AI-enabled learning and feedback -- 3.2. AI-enabled assessment -- 3.2.1. Computerized adaptive testing -- 3.2.2. Automated writing evaluation (AWE) -- 3.3. Plagiarism and automated plagiarism detection -- 3.3.1. Plagiarism detection -- 4. Knowledge management, learner records, and data lakes -- 4.1. Integrated and comprehensive learner records -- 5. Learning systems inside and outside higher education -- 5.1. Learning management systems -- 5.2. Massive open online courses -- 5.3. Communication back channels -- 5.4. The business ethics of virtual learning -- 6. Responsive and resilient design in learning engineering -- 6.1. Operationalizing social and ethical norms -- 6.2. Trust and trustworthiness -- 6.3. Reconceptualizing the role of learners and educators -- 6.4. Explainability and knowledge translation -- 6.5. Implementation validity of learning technologies -- 6.6. Knowledge management and common standards -- 6.7. Ethical pluralism and cross-cultural issues -- 7. Conclusion -- References
    Chapter 9: Perceiving a humorous robot as a social partner -- 1. Introduction -- 2. Background -- 2.1. Humor in human interactions -- 2.2. Humorous robots -- 2.3. Social recovery in HRI -- 2.4. Trust in human-robot teams -- 3. Humor and trust -- 4. Research questions -- 5. Method -- 5.1. The humor types -- 5.2. Programming the NAO -- 5.3. The iSpy testbed -- 6. Experiment -- 6.1. Procedure -- 6.2. Measures -- 6.3. Participants -- 7. Results -- 7.1. RQ1: Effect of humor type -- 7.2. RQ2: Effect of gender -- 7.3. RQ3: Effect of age -- 7.4. RQ4: Effect of previous NAO experience -- 7.5. RQ5: Effect of previous robot experience -- 8. Discussion and future work -- 9. Conclusion -- References
    Chapter 10: Real-time AI: Using AI on the tactical edge -- 1. Introduction -- 1.1. Imprecise computation in AI -- 1.2. AI on edge devices -- 1.3. Contributions -- 2. Problem definition -- 2.1. Imprecise computations model -- 3. Related work -- 4. Multitask neural network model -- 4.1. Neural networks -- 4.2. Our multitask model with sequential training -- 4.2.1. Experiment -- 4.2.2. Neural network specifications -- 4.2.3. Experimental results -- 4.2.4. Observations and discussion -- 4.3. Edge focused model -- 4.3.1. Experiment -- 4.3.2. Experimental results -- 4.3.3. Observations and discussion -- 4.4. Discussion -- 5. Scheduling -- 5.1. Scheduling model -- 5.2. Dynamic programming approach -- 5.3. Greedy algorithm -- 5.4. Experiments -- 5.5. Observations and discussion -- 6. Conclusions -- Acknowledgments -- References
    Chapter 11: Building a trustworthy digital twin: A brave new world of human machine teams and autonomous biological inter ... -- 1. Introduction -- 2. Examination of the current state of biosecurity: What does assured trust in BIoT look like? What happens when it break ... -- 2.1. Use case: Digital trust in AI driven BIoT in an era of pandemic -- 3. Security maturity of cyber-physical-biological systems in the biopharma sector -- 4. Antiquated biosafety and security net -- 4.1. BIoT gap analysis -- 5. BioSecure digital twin response -- 6. Trust between human-machine teams deploying AI driven digital twins -- 7. Zero-trust approach to biopharma cybersecurity -- 8. Trust framework for biological internet of things (BIoT) -- 9. Digital twin trust framework for human-machine teams -- 10. Digital twin opportunities and challenges to improve trust in human-machine teams -- 11. Future research and conclusion -- References
    Chapter 12: A framework of human factors methods for safe, ethical, and usable artificial intelligence in defense -- 1. Introduction -- 2. Method -- 2.1. Integration of responsible AI principles -- 2.2. Human factors and ergonomics methods review -- 2.3. Workshop 1: Mapping HFE methods to AI-based ADF capability life cycle phases -- 2.4. Workshop 2: Mapping of HFE methods to modified NATO principles of responsible AI use -- 3. Results -- 3.1. Applicability of HFE methods across the defense capability life cycle -- 3.2. Suitability of HFE methods for assessing principles of responsible AI use -- 3.3. Prototype framework of human factors and ergonomics methods for safe, ethical, and usable AI -- 4. Discussion -- 4.1. Future areas of research -- Appendix A: Mapping of methods to the ADF capability life cycle phases -- Appendix B: Mapping of methods to each of the modified NATO principles of responsible use of AI -- References
    Chapter 13: A schema for harms-sensitive reasoning, and an approach to populate its ontology by human annotation -- 1. Introduction: Chess bot incident begs for harms reasoning licensure -- 2. Generating values-driven behavior -- 3. Moral-scene assessment: Minds, and affordances to them -- 4. Injury: How physical harms come to be -- 4.1. Injury modeling and classification.
    Other edition: ISBN 0-443-15988-2
    Language: English