Nicholas Carah, Sandro Demaio, Louise Holly, Ilona Kickbusch, Carmel Williams
{"title":"除了禁止:我们的数字世界必须对年轻人更安全。","authors":"Nicholas Carah, Sandro Demaio, Louise Holly, Ilona Kickbusch, Carmel Williams","doi":"10.1002/hpja.938","DOIUrl":null,"url":null,"abstract":"<p>Social media offers young people opportunities for connection, self-expression and learning, but not without increasingly evidenced health risks. A new report from the WHO suggests that 11% of adolescents show signs of problematic social media behaviours and experience negative consequences such as disrupted sleep.<span><sup>1</sup></span> As governments around the world grapple to find the balance between access and protections, the question arises: can we build a safer, more balanced digital space for children?</p><p>The regulation of children's use of social media is a growing global public health priority.<span><sup>2</sup></span> In the United States, legislation sets the age of 13 as the threshold for children creating new social media accounts. The European Union has considered raising the minimum age for social media access to 16, and in France, social media platforms must refuse access to children under 15 unless they have parental permission.</p><p>Australia's recently announced proposal is expected to go further, completely banning young people from social media.<span><sup>3</sup></span> This controversial idea hinges on the outcomes of ongoing trials involving age verification and assurance technologies.<span><sup>4</sup></span> While discussions around banning children from social media are happening in several countries, Australia's approach could reshape how social media companies manage user access for minors. It focuses on the effectiveness of two key technological solutions: age assurance and age verification.</p><p>Age assurance includes a variety of techniques designed to estimate or determine a user's age. Methods include self-reporting or parental verification, and even using technology like facial recognition or analysing scrolling habits. These methods, however, can easily be circumvented by savvy teens making enforcement of any ban difficult.</p><p>Meanwhile, age verification involves confirming a person's age by matching their identity to a verified source of information, such as a government-issued document. However, concerns arise over privacy and security risks, whether personal data are managed by the social media companies themselves or with third parties.<span><sup>5</sup></span></p><p>Beyond the technical challenges of determining user age, there are social and cultural risks associated with age verification systems. Young people often use social media to explore their identities and seek information they may not feel comfortable seeking from parents, teachers or peers.<span><sup>6</sup></span> Social media provides a level of anonymity, allowing them to ask questions about personal topics such as body image, sexuality and relationships. Age verification requirements could undermine this anonymity, stripping young users of their right to remain forgotten online.<span><sup>7</sup></span></p><p>Another issue is the potential to exacerbate digital divides. Navigating age verification systems may prove more difficult for young people with lower digital access or literacy, or limited access to the necessary identification documents. 
These individuals could be excluded from mainstream social media platforms altogether, potentially pushing them into more dangerous, less regulated corners of the internet.</p><p>Despite the many challenges, some social media companies have begun to address public concerns over the safety of children on their platforms. Two common strategies are the creation of advisory groups focused on user safety and the development of specialised apps and accounts for children. For example, Meta has established a ‘Safety Advisory Council’ and an ‘Instagram Suicide and Self-Injury Advisory Group’ to guide its child safety policies. However, little is known about how these groups operate or what impact their advice has had.</p><p>Meanwhile, platforms like YouTube have introduced child-specific versions, such as YouTube Kids, which offers a commercially lucrative ‘walled garden’ of content specifically curated for an ever-younger audience.</p><p>In September, Meta announced the creation of ‘teen accounts’ for Instagram, a feature aimed at the safety of younger users.<span><sup>8</sup></span> These accounts will have several key features, including accounts for users under 18 defaulting to private, and users under 16 needing parental permission to switch to a public profile. Teen accounts will limit interactions with strangers, restrict tagging and mentions, and apply content filters aimed at reducing exposure to harmful material. Young users will also receive notifications prompting them to log off after 60 min of use and turn on sleep mode, which mutes notifications overnight.</p><p>While these features may provide enhanced child and teen safety, they also raise concerns. Many of these ‘innovations’ are simply repackaged versions of pre-existing features and fail to represent protections on a scale that matches the known public health risks.<span><sup>9</sup></span> Moreover, these changes appear to further commercial interests. Instagram's vision for teens now focuses on fostering a more intimate, chat-based experience akin to Snapchat, while also incorporating TikTok-style entertainment through Reels.</p><p>Whether we ultimately decide to limit young people's use of social media or not, the more important task is to create online environments that enable these generations to flourish and be healthy in a digitalised world. Standalone bans are a potential quick-win for tech companies and policymakers but risk displacing important responsibilities that digital platforms have as commercial enterprises that make enormous profits from children's culture and private lives.</p><p>Young people do not want to be excluded from the digital world but do want to be better protected from inaccurate information and digital harms.<span><sup>10</sup></span> We must think critically about building social media spaces where young people are not targeted by advertisers looking to exploit their emotions or sell harmful products. Social media should prioritise users' health and well-being rather than aim to maximise attention and engagement at any cost.</p><p>Ultimately, young people deserve online spaces where they can explore their identities, form social connections, and express themselves freely, without being inundated with harmful content or manipulated by advertisers. 
Rather than focusing on restricting access, policymakers and platforms should be expected to provide high-quality, educational and supportive content that fosters healthy online experiences.</p><p>There are opportunities to draw on the lessons and experiences from the fields of public health and health promotion to apply similar approaches to tackle the risks and benefits associated with the digitalised world.<span><sup>2</sup></span> For example, public health and health promotion approaches have provided both safeguards to limit young people's exposure to hazardous products and also provided clear guidance on the safe levels of access and strategic incentives. Tobacco, alcohol and car and road safety are good examples of public health success, where the evidence is clear that unfettered access and/or exposure damages the health and well-being of young people.<span><sup>2, 11, 12</sup></span> In Australia and other similar countries, a diverse range of public health actions have been put in place to protect young people from the harm caused by unlimited exposure to these products. These include regulation and legislation, such as setting age limits; fiscal responses including taxes and levees on the products; reducing access through restrictions on where products can be sold and consumed; providing clear and accurate information and advice to the community; and guidelines on safe use of these products.</p><p>Holding multinational corporations, including the owners of social media platforms, accountable for the harm their products cause young people is critical, but it is the responsibility of governments to act to protect young people's rights both to engage in the digital world and to health and well-being.</p><p>Carmel Williams is the Editor-in-Chief of <i>HPJA</i> and a co-author of this article. They were excluded from editorial decision-making related to the acceptance and publication of this article. All authors declare no conflict of interest.</p>","PeriodicalId":47379,"journal":{"name":"Health Promotion Journal of Australia","volume":"36 1","pages":""},"PeriodicalIF":1.4000,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/hpja.938","citationCount":"0","resultStr":"{\"title\":\"Beyond banning: Our digital world must be made safer for young people\",\"authors\":\"Nicholas Carah, Sandro Demaio, Louise Holly, Ilona Kickbusch, Carmel Williams\",\"doi\":\"10.1002/hpja.938\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Social media offers young people opportunities for connection, self-expression and learning, but not without increasingly evidenced health risks. A new report from the WHO suggests that 11% of adolescents show signs of problematic social media behaviours and experience negative consequences such as disrupted sleep.<span><sup>1</sup></span> As governments around the world grapple to find the balance between access and protections, the question arises: can we build a safer, more balanced digital space for children?</p><p>The regulation of children's use of social media is a growing global public health priority.<span><sup>2</sup></span> In the United States, legislation sets the age of 13 as the threshold for children creating new social media accounts. 
The European Union has considered raising the minimum age for social media access to 16, and in France, social media platforms must refuse access to children under 15 unless they have parental permission.</p><p>Australia's recently announced proposal is expected to go further, completely banning young people from social media.<span><sup>3</sup></span> This controversial idea hinges on the outcomes of ongoing trials involving age verification and assurance technologies.<span><sup>4</sup></span> While discussions around banning children from social media are happening in several countries, Australia's approach could reshape how social media companies manage user access for minors. It focuses on the effectiveness of two key technological solutions: age assurance and age verification.</p><p>Age assurance includes a variety of techniques designed to estimate or determine a user's age. Methods include self-reporting or parental verification, and even using technology like facial recognition or analysing scrolling habits. These methods, however, can easily be circumvented by savvy teens making enforcement of any ban difficult.</p><p>Meanwhile, age verification involves confirming a person's age by matching their identity to a verified source of information, such as a government-issued document. However, concerns arise over privacy and security risks, whether personal data are managed by the social media companies themselves or with third parties.<span><sup>5</sup></span></p><p>Beyond the technical challenges of determining user age, there are social and cultural risks associated with age verification systems. Young people often use social media to explore their identities and seek information they may not feel comfortable seeking from parents, teachers or peers.<span><sup>6</sup></span> Social media provides a level of anonymity, allowing them to ask questions about personal topics such as body image, sexuality and relationships. Age verification requirements could undermine this anonymity, stripping young users of their right to remain forgotten online.<span><sup>7</sup></span></p><p>Another issue is the potential to exacerbate digital divides. Navigating age verification systems may prove more difficult for young people with lower digital access or literacy, or limited access to the necessary identification documents. These individuals could be excluded from mainstream social media platforms altogether, potentially pushing them into more dangerous, less regulated corners of the internet.</p><p>Despite the many challenges, some social media companies have begun to address public concerns over the safety of children on their platforms. Two common strategies are the creation of advisory groups focused on user safety and the development of specialised apps and accounts for children. For example, Meta has established a ‘Safety Advisory Council’ and an ‘Instagram Suicide and Self-Injury Advisory Group’ to guide its child safety policies. 
However, little is known about how these groups operate or what impact their advice has had.</p><p>Meanwhile, platforms like YouTube have introduced child-specific versions, such as YouTube Kids, which offers a commercially lucrative ‘walled garden’ of content specifically curated for an ever-younger audience.</p><p>In September, Meta announced the creation of ‘teen accounts’ for Instagram, a feature aimed at the safety of younger users.<span><sup>8</sup></span> These accounts will have several key features, including accounts for users under 18 defaulting to private, and users under 16 needing parental permission to switch to a public profile. Teen accounts will limit interactions with strangers, restrict tagging and mentions, and apply content filters aimed at reducing exposure to harmful material. Young users will also receive notifications prompting them to log off after 60 min of use and turn on sleep mode, which mutes notifications overnight.</p><p>While these features may provide enhanced child and teen safety, they also raise concerns. Many of these ‘innovations’ are simply repackaged versions of pre-existing features and fail to represent protections on a scale that matches the known public health risks.<span><sup>9</sup></span> Moreover, these changes appear to further commercial interests. Instagram's vision for teens now focuses on fostering a more intimate, chat-based experience akin to Snapchat, while also incorporating TikTok-style entertainment through Reels.</p><p>Whether we ultimately decide to limit young people's use of social media or not, the more important task is to create online environments that enable these generations to flourish and be healthy in a digitalised world. Standalone bans are a potential quick-win for tech companies and policymakers but risk displacing important responsibilities that digital platforms have as commercial enterprises that make enormous profits from children's culture and private lives.</p><p>Young people do not want to be excluded from the digital world but do want to be better protected from inaccurate information and digital harms.<span><sup>10</sup></span> We must think critically about building social media spaces where young people are not targeted by advertisers looking to exploit their emotions or sell harmful products. Social media should prioritise users' health and well-being rather than aim to maximise attention and engagement at any cost.</p><p>Ultimately, young people deserve online spaces where they can explore their identities, form social connections, and express themselves freely, without being inundated with harmful content or manipulated by advertisers. Rather than focusing on restricting access, policymakers and platforms should be expected to provide high-quality, educational and supportive content that fosters healthy online experiences.</p><p>There are opportunities to draw on the lessons and experiences from the fields of public health and health promotion to apply similar approaches to tackle the risks and benefits associated with the digitalised world.<span><sup>2</sup></span> For example, public health and health promotion approaches have provided both safeguards to limit young people's exposure to hazardous products and also provided clear guidance on the safe levels of access and strategic incentives. 
Tobacco, alcohol and car and road safety are good examples of public health success, where the evidence is clear that unfettered access and/or exposure damages the health and well-being of young people.<span><sup>2, 11, 12</sup></span> In Australia and other similar countries, a diverse range of public health actions have been put in place to protect young people from the harm caused by unlimited exposure to these products. These include regulation and legislation, such as setting age limits; fiscal responses including taxes and levees on the products; reducing access through restrictions on where products can be sold and consumed; providing clear and accurate information and advice to the community; and guidelines on safe use of these products.</p><p>Holding multinational corporations, including the owners of social media platforms, accountable for the harm their products cause young people is critical, but it is the responsibility of governments to act to protect young people's rights both to engage in the digital world and to health and well-being.</p><p>Carmel Williams is the Editor-in-Chief of <i>HPJA</i> and a co-author of this article. They were excluded from editorial decision-making related to the acceptance and publication of this article. All authors declare no conflict of interest.</p>\",\"PeriodicalId\":47379,\"journal\":{\"name\":\"Health Promotion Journal of Australia\",\"volume\":\"36 1\",\"pages\":\"\"},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2024-12-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/hpja.938\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Health Promotion Journal of Australia\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/hpja.938\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"PUBLIC, ENVIRONMENTAL & OCCUPATIONAL HEALTH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Health Promotion Journal of Australia","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/hpja.938","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"PUBLIC, ENVIRONMENTAL & OCCUPATIONAL HEALTH","Score":null,"Total":0}
Beyond banning: Our digital world must be made safer for young people
Social media offers young people opportunities for connection, self-expression and learning, but not without increasingly well-evidenced health risks. A new report from the WHO suggests that 11% of adolescents show signs of problematic social media behaviours and experience negative consequences such as disrupted sleep.1 As governments around the world grapple with finding a balance between access and protection, the question arises: can we build a safer, more balanced digital space for children?
The regulation of children's use of social media is a growing global public health priority.2 In the United States, legislation sets the age of 13 as the threshold for children creating new social media accounts. The European Union has considered raising the minimum age for social media access to 16, and in France, social media platforms must refuse access to children under 15 unless they have parental permission.
Australia's recently announced proposal is expected to go further, completely banning young people from social media.3 This controversial idea hinges on the outcomes of ongoing trials of age verification and assurance technologies.4 While discussions about banning children from social media are under way in several countries, Australia's approach could reshape how social media companies manage user access for minors. The approach centres on the effectiveness of two key technological solutions: age assurance and age verification.
Age assurance includes a variety of techniques designed to estimate or determine a user's age. Methods range from self-reporting and parental verification to technologies such as facial recognition and the analysis of scrolling habits. These methods, however, can easily be circumvented by savvy teens, making enforcement of any ban difficult.
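To make concrete how weak these signals are, here is a minimal sketch, in Python, of an age-assurance check that combines self-reporting with optional estimator outputs. The names, thresholds and signal types are hypothetical illustrations, not any platform's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AgeSignals:
    self_reported_age: int                    # age the user typed at sign-up
    parent_confirmed: bool                    # parental verification, where offered
    estimated_age_from_face: Optional[float]  # output of a face-based estimator, if used
    behavioural_estimate: Optional[float]     # age inferred from usage patterns, if used


def assure_age(signals: AgeSignals, minimum_age: int = 16) -> bool:
    """Return True if the combined signals suggest the user meets the minimum age.

    A toy heuristic: when only the self-reported age is available, the check
    reduces to trusting whatever the user typed, which is why such systems are
    easy to circumvent.
    """
    if signals.parent_confirmed:
        return signals.self_reported_age >= minimum_age

    estimates = [float(signals.self_reported_age)]
    if signals.estimated_age_from_face is not None:
        estimates.append(signals.estimated_age_from_face)
    if signals.behavioural_estimate is not None:
        estimates.append(signals.behavioural_estimate)

    # Be conservative: the lowest available estimate must clear the threshold.
    return min(estimates) >= minimum_age
```

With no estimator signals available, a 13-year-old who simply enters an age of 16 passes this check, which is exactly the circumvention problem described above.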
Meanwhile, age verification involves confirming a person's age by matching their identity to a verified source of information, such as a government-issued document. However, concerns arise over privacy and security risks, whether personal data are managed by the social media companies themselves or by third parties.5
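The privacy trade-off can be illustrated with a sketch of a hypothetical third-party verifier that checks a date of birth from a government-issued document but hands the platform only an over-age attestation, never the document itself. The class and method names are assumptions for illustration, not a real service or API.

```python
import hashlib
from datetime import date


class IdentityVerifier:
    """Hypothetical third-party service that performs the identity document check."""

    def attest_over_age(self, document_number: str, date_of_birth: date,
                        minimum_age: int) -> dict:
        # A real verifier would validate the document against an official registry;
        # here we only compute the age implied by the supplied date of birth.
        today = date.today()
        age = today.year - date_of_birth.year - (
            (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
        )
        # Return only a yes/no attestation and an opaque token -- not the document
        # number or birth date -- so the platform never needs to store either.
        token = hashlib.sha256(f"{document_number}:{minimum_age}".encode()).hexdigest()
        return {"over_minimum_age": age >= minimum_age, "attestation_token": token}


# Example: the platform learns only whether the user is over 16, nothing more.
result = IdentityVerifier().attest_over_age("A1234567", date(2010, 5, 1), minimum_age=16)
```

Even this design concentrates sensitive data with the verifier, which is precisely the third-party risk flagged above.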
Beyond the technical challenges of determining user age, there are social and cultural risks associated with age verification systems. Young people often use social media to explore their identities and seek information they may not feel comfortable seeking from parents, teachers or peers.6 Social media provides a level of anonymity, allowing them to ask questions about personal topics such as body image, sexuality and relationships. Age verification requirements could undermine this anonymity, stripping young users of their right to remain anonymous online.7
Another issue is the potential to exacerbate digital divides. Navigating age verification systems may prove more difficult for young people with lower digital access or literacy, or limited access to the necessary identification documents. These individuals could be excluded from mainstream social media platforms altogether, potentially pushing them into more dangerous, less regulated corners of the internet.
Despite the many challenges, some social media companies have begun to address public concerns over the safety of children on their platforms. Two common strategies are the creation of advisory groups focused on user safety and the development of specialised apps and accounts for children. For example, Meta has established a ‘Safety Advisory Council’ and an ‘Instagram Suicide and Self-Injury Advisory Group’ to guide its child safety policies. However, little is known about how these groups operate or what impact their advice has had.
Meanwhile, platforms like YouTube have introduced child-specific versions, such as YouTube Kids, which offers a commercially lucrative ‘walled garden’ of content specifically curated for an ever-younger audience.
In September, Meta announced the creation of ‘teen accounts’ for Instagram, a feature aimed at the safety of younger users.8 These accounts will have several key features: accounts for users under 18 will default to private, and users under 16 will need parental permission to switch to a public profile. Teen accounts will limit interactions with strangers, restrict tagging and mentions, and apply content filters aimed at reducing exposure to harmful material. Young users will also receive notifications prompting them to log off after 60 minutes of use and to turn on sleep mode, which mutes notifications overnight.
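To show how the announced defaults tier by age, here is an illustrative sketch of the settings as publicly described; the field names and structure are our own paraphrase for illustration, not Meta's actual code or configuration.

```python
from dataclasses import dataclass


@dataclass
class TeenAccountSettings:
    private_by_default: bool              # under-18 profiles start private
    parental_consent_to_go_public: bool   # under-16s need permission to switch
    limit_stranger_interactions: bool
    restrict_tags_and_mentions: bool
    sensitive_content_filter: bool
    daily_use_reminder_minutes: int       # prompt to log off after this long
    sleep_mode_overnight: bool            # notifications muted overnight


def default_settings_for_age(age: int) -> TeenAccountSettings:
    """Defaults for under-18 users as publicly described; adult accounts are out of scope."""
    return TeenAccountSettings(
        private_by_default=age < 18,
        parental_consent_to_go_public=age < 16,
        limit_stranger_interactions=True,
        restrict_tags_and_mentions=True,
        sensitive_content_filter=True,
        daily_use_reminder_minutes=60,
        sleep_mode_overnight=True,
    )
```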
While these features may provide enhanced child and teen safety, they also raise concerns. Many of these ‘innovations’ are simply repackaged versions of pre-existing features and fail to represent protections on a scale that matches the known public health risks.9 Moreover, these changes appear to further commercial interests. Instagram's vision for teens now focuses on fostering a more intimate, chat-based experience akin to Snapchat, while also incorporating TikTok-style entertainment through Reels.
Whether we ultimately decide to limit young people's use of social media or not, the more important task is to create online environments that enable these generations to flourish and be healthy in a digitalised world. Standalone bans are a potential quick win for tech companies and policymakers but risk displacing the important responsibilities that digital platforms bear as commercial enterprises making enormous profits from children's culture and private lives.
Young people do not want to be excluded from the digital world but do want to be better protected from inaccurate information and digital harms.10 We must think critically about building social media spaces where young people are not targeted by advertisers looking to exploit their emotions or sell harmful products. Social media should prioritise users' health and well-being rather than aim to maximise attention and engagement at any cost.
Ultimately, young people deserve online spaces where they can explore their identities, form social connections, and express themselves freely, without being inundated with harmful content or manipulated by advertisers. Rather than focusing on restricting access, policymakers and platforms should be expected to provide high-quality, educational and supportive content that fosters healthy online experiences.
There are opportunities to draw on lessons and experiences from the fields of public health and health promotion and to apply similar approaches to the risks and benefits of the digitalised world.2 For example, public health and health promotion approaches have provided safeguards that limit young people's exposure to hazardous products, as well as clear guidance on safe levels of access and strategic incentives. Tobacco, alcohol, and car and road safety are good examples of public health success, where the evidence is clear that unfettered access and/or exposure damages the health and well-being of young people.2, 11, 12 In Australia and other similar countries, a diverse range of public health actions has been put in place to protect young people from the harm caused by unlimited exposure to these products. These include regulation and legislation, such as setting age limits; fiscal responses, including taxes and levies on the products; reducing access through restrictions on where products can be sold and consumed; providing clear and accurate information and advice to the community; and guidelines on the safe use of these products.
Holding multinational corporations, including the owners of social media platforms, accountable for the harm their products cause young people is critical, but it is the responsibility of governments to act to protect young people's rights both to engage in the digital world and to health and well-being.
Carmel Williams is the Editor-in-Chief of HPJA and a co-author of this article. They were excluded from editorial decision-making related to the acceptance and publication of this article. All authors declare no conflict of interest.