    Online resource
    JMIR Publications Inc.; 2023
    In: Journal of Medical Internet Research, JMIR Publications Inc., Vol. 25 (2023-09-15), p. e49061-
    Abstract: Throughout the COVID-19 pandemic, there has been a concern that social media may contribute to vaccine hesitancy due to the wide availability of antivaccine content on social media platforms. YouTube has stated its commitment to removing content that contains misinformation on vaccination. Nevertheless, such claims are difficult to audit. There is a need for more empirical research to evaluate the actual prevalence of antivaccine sentiment on the internet.
    Objective: This study examines recommendations made by YouTube's algorithms in order to investigate whether the platform may facilitate the spread of antivaccine sentiment on the internet. We assess the prevalence of antivaccine sentiment in recommended videos and evaluate how real-world users' experiences differ from the personalized recommendations obtained by using synthetic data collection methods, which are often used to study YouTube's recommendation systems.
    Methods: We trace trajectories from a credible seed video posted by the World Health Organization to antivaccine videos, following only video links suggested by YouTube's recommendation system. First, we gamify the process by asking real-world participants to intentionally find an antivaccine video with as few clicks as possible. Having collected crowdsourced trajectory data from respondents from (1) the World Health Organization and United Nations system (n=33) and (2) Amazon Mechanical Turk (n=80), we next compare the recommendations seen by these users to recommended videos obtained from (3) the YouTube application programming interface's RelatedToVideoID parameter (n=40) and (4) from clean browsers without any identifying cookies (n=40), which serve as reference points. We develop machine learning methods to classify antivaccine content at scale, enabling us to automatically evaluate 27,074 video recommendations made by YouTube.
    Results: We found no evidence that YouTube promotes antivaccine content; the average share of antivaccine videos remained well below 6% at all steps in users' recommendation trajectories. However, users' watch histories significantly affect video recommendations, suggesting that data from the application programming interface or from a clean browser do not offer an accurate picture of the recommendations that real users are seeing. Real users saw slightly more provaccine content as they advanced through their recommendation trajectories, whereas synthetic users were drawn toward irrelevant recommendations as they advanced. Rather than antivaccine content, videos recommended by YouTube are likely to contain health-related content that is not specifically related to vaccination. These videos are usually longer and contain more popular content.
    Conclusions: Our findings suggest that the common perception of YouTube's recommendation system as a "rabbit hole" may be inaccurate, and that YouTube may instead be following a "blockbuster" strategy that attempts to engage users by promoting other content that has been reliably successful across the platform.
    Material type: Online resource
    ISSN: 1438-8871
    Language: English
    Publisher: JMIR Publications Inc.
    Publication date: 2023
    ZDB ID: 2028830-X