The Lei Felca Problem Is Not About Linux. It's About Who's Responsible for Your Kids.

Brazil's new digital protection law puts operating systems in regulatory scope. I have opinions about this, and I'm going to share them.
I don’t usually write about politics. I actively avoid it. This blog is about code, security, and occasionally what I ate in Amsterdam.
But Lei Felca (Law 15.211/2025), which goes into effect in Brazil today, March 17th, is not really a political issue for me. It’s a technical issue, a parenting philosophy issue, and honestly a personal one. So here we are.
Let me be upfront: I’m Brazilian. I love Brazil. I’m not writing this from a place of dismissal toward my home country. I’m writing this because I care enough to be frustrated.
What the law actually says
Lei Felca is officially the Digital Statute for Children and Adolescents. The name comes from a YouTuber whose video about child exploitation on social media hit 50 million views and accelerated the bill through Congress. The intent is genuinely good. Protecting kids online is a real problem worth solving.
The issue is Article 12. It extends the law’s age-verification obligation, verification that cannot be self-declared, beyond apps and platforms to the operating systems themselves.
Read that again. Operating systems. Not just social networks. Not just streaming platforms. Operating systems.
Brazil is the first country in the world to put this obligation directly on OS providers at a national level. The UK didn’t do it. Australia didn’t do it. Even U.S. states that have similar laws in progress haven’t gone this far yet. Brazil went first, and “first” here means “into completely uncharted territory with no roadmap.”
The penalty for non-compliance is up to R$50 million in fines and, in the worst case, national blocking of the service enforced through ISPs and DNS providers. Canonical’s Brazilian entity (the company behind Ubuntu) is already on the ANPD’s radar. MidnightBSD, a small open-source BSD variant, has already blocked Brazilian users proactively rather than face the regulatory exposure.
Why this is genuinely bad for open source
I want to set aside the “will Linux be banned” panic for a second because I think it’s mostly overblown in the short term. The Marco Civil da Internet protects open technologies. Outright prohibition would require a court order. The most likely scenario isn’t a ban, it’s years of legal ambiguity that chills development and distribution.
That chilling effect is the real problem.
Open source projects don’t have legal departments. The Linux kernel is not a company. Debian doesn’t have a compliance team. When a Brazilian developer maintains a small Linux distribution as a passion project and sees “you may be subject to R$50 million fines,” they’re not going to lawyer up and fight it. They’re going to quietly stop distributing to Brazil, like MidnightBSD did. Or they’re going to stop altogether.
The law was written with centralized platforms in mind: Meta, TikTok, YouTube. Entities with engineering teams, legal teams, and the infrastructure to implement identity verification. “Just add age verification” is a burden those companies can absorb, however much I disagree with the privacy implications. For a distributed, community-maintained operating system with no single legal entity, it’s not a burden. It’s an impossibility.
And what does “age verification that can’t be self-declared and isn’t falsifiable” even look like for an OS? The law is deliberately vague here. The answer might involve biometrics, government ID integration like Gov.br, or some centralized identity system. Every one of those options involves collecting sensitive data from every single person who installs an operating system, turning the act of setting up your computer into a government identity checkpoint.
The part where I get personal
Here’s the thing I keep coming back to, and it’s not a technical argument. It’s a values argument.
My dad raised me with a very specific philosophy about the internet: it’s a tool, it’s powerful, it can be dangerous, and your job as a parent is to be present enough to guide how your kid uses it. He was involved. He asked questions. He knew what I was doing online. He set boundaries and enforced them, not through software locks, but through actual parenting.
I had a computer in my room as a teenager. I had internet access. And because my dad treated me like a person who was learning to navigate the world rather than a problem to be technically contained, I learned to actually navigate it. I learned to think critically about what I read. I learned when something felt wrong. Those skills came from a present parent, not from an OS-level age gate.
The philosophy behind Lei Felca is the opposite of that. It says: parents can’t be trusted to parent, so we’ll make it the responsibility of the operating system. The tech company. The software maintainer in another country who volunteers their time to keep a Linux distro alive. We’ll build a surveillance infrastructure and call it child protection.
That’s not protection. That’s abdication dressed up as policy.
I understand that not every parent has the time, resources, or technical literacy that my dad had. That’s a real problem. But the solution to that problem is not to make it someone else’s legal obligation to surveil every user of every piece of software. The solution is education, support resources for parents, and actual investment in digital literacy. None of those things are in Article 12.
The technical impossibility argument
I want to be clear about something: there is no technically sound way to implement unfalsifiable age verification at the OS level while preserving privacy. These two requirements are in direct tension.
A true unfalsifiable age check requires verifiable identity. Verifiable identity means linking your government ID to your operating system installation. That data has to live somewhere. Somewhere means it can be breached, subpoenaed, sold, or misused. The history of identity databases is not a history of secure, well-governed systems. It’s a history of data breaches, scope creep, and authoritarian governments finding creative uses for surveillance infrastructure that was built for something more benign.
The privacy-preserving alternative the Linux community has discussed, a local age signal that apps can query without exposing personal data, is trivially falsifiable. An adult can set any value. A child can set any value. It might satisfy a weak reading of the law, but it does nothing for the stated goal, and the law explicitly requires that the verification not be self-declared. So the privacy-preserving option isn’t compliant, and the compliant option isn’t private.
There’s no middle path here. That’s not a fixable engineering problem. It’s a fundamental conflict in the law’s requirements.
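To make the falsifiability point concrete, here is a minimal sketch of the “local age signal” idea, with hypothetical file paths and function names of my own invention, not any real OS interface. The OS stores an age value locally; applications query it. Nothing in this design can distinguish a true declaration from a false one, because whoever controls the machine controls the value.

```python
# Hypothetical sketch of a local, self-declared age signal.
# None of these names correspond to a real OS API.
import json
import tempfile
from pathlib import Path

def write_age_signal(config_path: Path, age: int) -> None:
    """Self-declared: whoever can write the file chooses the value."""
    config_path.write_text(json.dumps({"declared_age": age}))

def query_age_signal(config_path: Path) -> int:
    """What an app would see. Carries no proof of who wrote it, or why."""
    return json.loads(config_path.read_text())["declared_age"]

# A child can declare 18; an adult can declare 10. The absence of proof
# is exactly what makes this privacy-preserving -- and exactly what
# fails the law's "not self-declared" requirement.
cfg = Path(tempfile.mkdtemp()) / "age-signal.json"
write_age_signal(cfg, 18)          # set by anyone, verified by no one
print(query_age_signal(cfg))       # the same value, whoever wrote it
```

The only way to harden this signal is to bind it to verified identity, which is precisely the surveillance trade-off described above.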
What I’d rather see
I genuinely want kids to be safer online. That’s not a controversial position. The question is how.
Content platform responsibility: hold social networks and streaming services accountable for the content they surface to minors. That’s appropriate and technically feasible.
Parental tooling investment: fund and promote actual parental control software, digital literacy programs in schools, and support resources for parents.
Liability for platforms that knowingly target minors with harmful content: yes, absolutely. Make that illegal and enforce it.
Mandatory age verification at the OS level with penalties that could reach a volunteer-maintained open source project: no. That’s the wrong target, the wrong mechanism, and the wrong mental model for who is responsible for a child’s wellbeing online.
On why I’m not going back
I get asked sometimes whether I’ll move back to Brazil. The honest answer is that I don’t know, and it’s not a simple question. I love home. I miss things about it every day.
But situations like this are part of why I’m not rushing. When a government’s instinct for a real problem is to build surveillance infrastructure and distribute regulatory liability to open source maintainers on other continents, that tells you something about the default direction of the policy environment. It tells you about how the relationship between citizens and the state is being conceived.
I want to live somewhere where the instinct is to empower people rather than constrain software. Where the answer to “kids are seeing bad things online” is “let’s support parents and hold platforms accountable” rather than “let’s make every OS a checkpoint.”
Brazil can be that place. It has been, in many ways. I hope it gets there. But today, on the day this law goes into effect, it is moving in the wrong direction.
The repos for everything I build are open source and public. I believe in open software. I believe in the internet being open. And I believe that the best protection for a kid online is an adult who’s paying attention.
That’s it. That’s my whole position.