Australia’s internet regulator has criticised the world’s biggest social platforms for failing to properly enforce the country’s ban on under-16s using their services, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting inadequate practices including allowing banned users to repeatedly attempt age verification and insufficient measures to prevent new accounts. In its first compliance report since the prohibition came into force, the regulator identified multiple shortcomings and has now moved from monitoring to active enforcement, warning that platforms must demonstrate they have implemented “appropriate systems and processes” to prevent children under 16 from accessing their services.
Regulatory Breaches Uncovered in First Major Review
Australia’s eSafety Commissioner has detailed a troubling pattern of non-compliance amongst the world’s most prominent social media platforms in her inaugural review since the ban took effect on 10 December. The report demonstrates that Meta, Snap, TikTok and YouTube have collectively neglected to establish adequate safeguards to prevent minors from using their services. Julie Inman Grant raised significant concerns about systemic weaknesses in age verification processes, noting that some platforms have allowed children who originally declared themselves to be under 16 to later claim they were older, thereby undermining the law’s intent.
The findings represent a significant escalation in regulatory action, with the eSafety Commissioner moving beyond monitoring to active enforcement. The regulator has made clear that merely demonstrating some children still maintain accounts is insufficient; platforms must instead provide concrete evidence that they have put in place comprehensive systems and procedures intended to stop under-16s from opening accounts in the first place. This shift signals the government’s commitment to holding tech giants responsible, with potential penalties looming for companies that do not meet the legal requirements.
- Permitting previously banned users to re-verify their age and regain account access
- Allowing repeated attempts at the same verification process without consequence
- Inadequate safeguards to block new under-16 accounts from being opened
- Limited reporting tools for parents and members of the public
- Lack of transparent data about regulatory measures and user account terminations
The Magnitude of the Issue
The considerable scale of social media activity amongst young Australians underscores the regulatory challenge facing both the authorities and the platforms in question. With numerous accounts already removed or restricted since the ban’s implementation, the figures paint a picture of extensive early non-compliance. The eSafety Commissioner’s findings indicate that the technical and procedural obstacles to implementing age restrictions have proved considerably more complex than anticipated, with platforms struggling to distinguish genuine age declarations from false claims. This complexity has left enforcement authorities grappling with the fundamental question of whether current age verification technologies are adequate to the task.
Beyond the technical obstacles lies a wider question about the willingness of companies to prioritise compliance over user growth. Social media companies have consistently opposed stringent age verification measures, citing privacy concerns and the genuine difficulty of confirming age online. However, the Commissioner’s report suggests that some platforms may not be making sufficient effort to deploy the infrastructure required by law. The move to active enforcement represents a pivotal moment: either platforms will substantially upgrade their compliance infrastructure, or they risk facing significant penalties that could reshape their business models in Australia and potentially influence regulatory approaches internationally.
What the Data Shows
In the first month following the ban’s implementation, Australian regulators indicated that 4.7 million accounts had been suspended or removed. Whilst this figure initially appeared to demonstrate successful compliance, later review reveals a more nuanced picture. The sheer volume of account deletions implies that many under-16s had initially succeeded in creating accounts, indicating that preventive controls were inadequate. Additionally, the data raises questions about whether deleted profiles represent genuine enforcement or simply users removing their accounts of their own accord in light of the new restrictions.
The restricted transparency concerning these figures has disappointed independent observers trying to determine the ban’s actual effectiveness. Platforms have disclosed little data about their implementation approaches, performance indicators, or the characteristics of deleted profiles. This opacity makes it challenging for regulators and the public to assess whether the ban is operating as planned or whether young people are merely discovering different means to use social media. The Commissioner’s insistence on detailed evidence of consistent enforcement practices reflects growing frustration with platforms’ reluctance to provide comprehensive data.
Sector Reaction and Pushback
The social media giants have responded to the regulatory enforcement measures with a combination of assurances of compliance and scepticism about the ban’s practicality. Meta, which runs Facebook and Instagram, emphasised its dedication to adhering to Australian law whilst at the same time contending that accurate age determination remains a major challenge across the industry. The company has called for an alternative strategy, suggesting that strong age verification systems and parental consent requirements implemented at the app store level would be more effective than platform-level enforcement. This stance reflects broader industry concerns that the current regulatory framework places an unrealistic burden on individual platforms.
Snap, the developer of Snapchat, has taken a more proactive public stance, stating that it had locked 450,000 accounts since the ban took effect and claiming to continue locking more daily. However, sector analysts dispute whether such figures demonstrate genuine compliance or simply represent reactive account management. The fundamental tension between platforms’ commercial structures—which historically relied on maximising user engagement and growth—and the statutory obligation to actively exclude an entire age demographic remains unresolved. Companies have long resisted stringent age verification, citing privacy issues and technical constraints, creating a standoff between regulators and platforms over who bears responsibility for implementation.
- Meta argues age verification should occur at app store level rather than on individual platforms
- Snap claims to have locked 450,000 accounts since the ban’s implementation in December
- Industry groups cite privacy issues and technical obstacles as impediments to effective age verification
- Platforms contend they are doing their best whilst questioning the ban’s general effectiveness
Wider Questions Concerning the Ban’s Effectiveness
As Australia’s under-16 social media ban moves into its enforcement phase, fundamental questions remain about whether the legislation will accomplish its intended goals or merely push young users towards less regulated platforms. The regulator’s first compliance report reveals that following implementation, significant loopholes remain—children continue finding ways to bypass age verification systems, and platforms have struggled to prevent new underage accounts from being created. Critics argue that the ban’s success depends not merely on regulatory vigilance but on whether young people will truly leave mainstream platforms or simply migrate to alternative services, encrypted messaging applications, or VPNs designed to mask their age and location.
The ban’s worldwide effects contribute further complexity to assessments of its success. Countries such as the United Kingdom, Canada, and several European nations are watching Australia’s initiative closely, exploring similar laws for their own citizens. If the ban does not successfully reduce children’s digital engagement or fails to protect them from dangerous online content, it could damage the case for comparable regulations elsewhere. Conversely, if implementation proves sufficiently strict to genuinely restrict underage usage, it may encourage other governments to pursue similar approaches. The result will likely influence global regulatory trends for years to come, ensuring Australia’s enforcement efforts are scrutinised far beyond its borders.
Who Benefits and Who Loses
Mental health advocates and child safety organisations have endorsed the ban as an essential measure against algorithmic manipulation and contact with harmful content. Parents and educators contend that removing young Australians from platforms built to maximise engagement could lower anxiety levels, improve sleep patterns, and decrease exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to social media use amongst adolescents, adding weight to these concerns. However, the ban also eliminates legitimate uses of social media for young people—keeping friendships alive, obtaining educational material, and engaging with online communities around common interests. The regulatory approach assumes harm exceeds benefit, a calculation that some young people and their families challenge.
The ban’s concrete implications reach past individual users to affect content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now confront legal barriers to participation. Small Australian businesses that rely on social media marketing lose access to younger demographic audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously employed effectively. Meanwhile, the ban unintentionally advantages large technology companies with the resources to build age verification infrastructure, potentially strengthening their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend far beyond the simple goal of child protection.
What Happens Next for Enforcement
Australia’s eSafety Commissioner has signalled a notable transition from passive monitoring to proactive action, marking a key milestone in the rollout of the under-16 ban. The authority will now compile information to determine whether platforms have failed to take “reasonable steps” to block minors from using their services, a statutory benchmark that goes further than simply documenting that young people remain on these platforms. This approach demands concrete evidence that organisations have introduced appropriate systems and protocols designed to exclude minors. The enforcement team has signalled it will conduct enquiries systematically, building cases that could lead to substantial penalties for non-compliance. This move from monitoring to intervention reveals growing frustration with the platforms’ current efforts and suggests that voluntary cooperation alone will no longer suffice.
The enforcement phase raises significant questions about the adequacy of available penalties and the practical mechanisms for holding tech giants accountable. Australia’s statutory provisions provide enforcement instruments, but their efficacy hinges on the eSafety Commissioner’s readiness to undertake formal action and the platforms’ capacity to adjust substantively. Global regulators, notably those in the United Kingdom and European Union, will closely monitor Australia’s implementation tactics and their consequences. A robust enforcement effort could serve as a model for other jurisdictions evaluating equivalent prohibitions, whilst failure might undermine the broader regulatory framework. The next phase will determine whether Australia’s pioneering regulatory approach delivers genuine protection for adolescents or remains largely symbolic in its effect.
