Quick Hits
For DC-area readers: Brian Chau of Alliance for the Future and the Substack From the New World and I are hosting a meetup on Capitol Hill on Wednesday, May 1 at 5:30 PM. Details here; please RSVP even if you are a maybe. It helps us know how much space to book.
I had a piece in The Dispatch about creating digital public infrastructure for online identity. This is not an idea I endorse outright, but one I believe we should explore more seriously. We’ll probably get such an infrastructure either way, so the question is not so much whether we want it, but whether we want the government to provide it. Also, I appeared on Brian Chau’s From the New World podcast to discuss my recent National Affairs piece, “How to Regulate Artificial Intelligence,” the Chinese AI ecosystem, and the benefits of open-source AI.
NASA, fed up with ballooning costs and slipping timelines for its planned Mars Sample Return mission, has decided to bid the project out to the private sector. The mission will involve sending a craft to Mars to collect samples of rocks that have been gathered by the Perseverance rover, and then bringing that craft back to Earth.
It’s been a bit of a quiet week in AI, but both Meta and Google have released new papers describing methods for context windows of millions of tokens (or more) in language models. Soon, language models will likely be able to reason over a body of text equivalent to, for example, all the words a person speaks aloud in their lifetime. Or every US federal law, simultaneously. Plan accordingly.
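For a rough sense of that scale, here is a back-of-envelope sketch; the words-per-day figure, the speaking-years span, and the tokens-per-word ratio are my own assumptions for illustration, not numbers from either paper.

```python
# Back-of-envelope: roughly how many tokens is a lifetime of speech?
# All figures below are rough assumptions, used only for illustration.
words_per_day = 16_000     # commonly cited estimate of words spoken per day
speaking_years = 70        # rough span of years spent talking
tokens_per_word = 1.3      # typical English tokenization ratio

lifetime_words = words_per_day * 365 * speaking_years
lifetime_tokens = int(lifetime_words * tokens_per_word)
print(f"~{lifetime_words:,} words, ~{lifetime_tokens:,} tokens")
# ~409 million words, ~531 million tokens: hundreds of millions of tokens,
# i.e. the "millions (or more)" regime these new methods are aiming at.
```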
The Main Idea
“Ethnicity and tribe began, by definition, where sovereignty and taxes ended. The ethnic zone was feared and stigmatized by state rhetoric precisely because it was beyond its grasp and therefore an example of defiance and an ever-present temptation to those who might wish to evade the state.”
-James C. Scott, The Art of Not Being Governed: An Anarchist History of Upland Southeast Asia (2009)
“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”
-John Perry Barlow, “A Declaration of the Independence of Cyberspace” (1996)
“The allegations before us today are a far cry from the facts of Halberstam. Rather than dealing with a serial burglar and his live-in partner-in-crime, we are faced with international terrorist networks and world-spanning internet platforms.”
-Justice Clarence Thomas, majority opinion, Twitter, Inc. v. Taamneh et al. (2023)
I began this project in January to help articulate a Constitutional and classically liberal vision for governing society in a period of rapid technological change. That remains my central goal. As I’ve wandered about the intellectual landscape, however, I’ve realized that there is another issue, just beneath the surface, that at once fascinates and vexes me: Jurisdiction. Who, exactly, governs the internet, and how far does their authority extend?
This question comes up all the time, but few people put it quite this bluntly. Perhaps it is uncomfortable to do so. The truth is that it is one of the most substantial unanswered questions of our time.
When the iPhone was released in 2007, there was, famously, no way for developers other than Apple to create apps. The App Store came a year later, and with it came a long list of rules and guidelines. Why? In Apple’s words:
The guiding principle of the App Store is simple—we want to provide a safe experience for users to get apps and a great opportunity for all developers to be successful. We do this by offering a highly curated App Store where every app is reviewed by experts and an editorial team helps users discover new apps every day. We also scan each app for malware and other software that may impact user safety, security, and privacy.
Apple’s approach was informed by decades of experience with the PC, where malware, spyware, and other forms of malicious software proliferated wildly. In the early 2000s, it was conventional wisdom that a smart PC user should regularly wipe their hard drive and reinstall a clean operating system; it was the only reliable way to clear away the software barnacles. Third-party toolbars would install themselves in web browsers (most infamously, Internet Explorer), slowing down computers, spying on users’ internet activity, and creating a litany of problems.
These annoyances and vulnerabilities were regarded as simply a cost of doing business, a fact of life for any device connected to the internet. And they were: such problems are a tradeoff of openness. The goal of the App Store was to avoid repeating this mistake, and it accomplished this by being far less open than Windows, the Mac, and Linux had been.
It worked. Users can and do install apps and buy digital goods from developers they have not heard of or vetted. Malware is far harder to install by accident, particularly on iOS, where other, non-App Store security mechanisms create a “sandbox” inside which third-party apps must remain. Independent developers built careers on the App Store, and numerous companies built multi-billion dollar businesses on top of it.
As software ate the world, Apple’s (and Google’s) role as stewards of these platforms came to resemble a governmental function. The two companies’ policies, which often proceeded more or less in lockstep with one another, set the rules for a large portion of the digital economy. “Do we need an FDA for social media?” some asked; the truth is we already had (and have) one in these firms and their app store policies.
Some governments used this dynamic to their advantage. When Indian policymakers wanted to ban TikTok, their job was fairly easy to carry out: simply command Apple and Google to forbid TikTok on the Indian app stores. When the Chinese government sought to restrict the use of virtual private networks (VPNs), which Chinese citizens used to evade the government’s internet censorship, again, the task was not so tough, at least as it applied to smartphones.
Western policymakers, on the other hand, were displeased. Policing of App Stores struck many as a quasi-governmental function, and policymakers, understandably, felt left out of the picture. Because of intellectual path dependency, we have chosen to think of this as an antitrust issue, but at its core, it’s a dispute about who has legitimate authority to govern the internet, or at least a major part of it.
Of course, Western policymakers don’t actually want to do the work of governing the internet. It’s hard—the political capital equivalent of owning a boat: fun in theory, expensive in fact. Policymakers in the US are further constrained by the First Amendment, which places serious limits on the government’s ability to police many forms of digital conduct (though this is an evolving area of the law).
So instead of creating “an FDA for social media” (a bad idea, by the way), the plan thus far seems to be to more explicitly delegate governmental authority over the internet to large tech firms. Witness the European Union’s Digital Markets Act, which, among many other things, requires Apple to allow third-party app stores and apps downloaded directly from the internet. But EU policymakers did not want to accept the tradeoffs of openness we witnessed with PCs in the 90s and early 2000s: some degree of policing for illicit content and malware was necessary. The EU, however, was not interested in assuming that responsibility itself, so instead it required Apple to serve as a regulator of the third-party app stores.
Governments, then, in the EU scheme, are a kind of meta-regulator, sitting on top of corporations that in turn serve as the real regulators—often against their will. If the EU doesn’t like the companies’ regulations, it can fine them up to 10% of global revenue (20% for repeat offenses). For Apple alone, with roughly $383 billion in annual revenue, that amounts to some $38 billion, eclipsing the annual budgets of some European member state governments.
US policymakers are pursuing a similar path, though more modestly. In Florida and other states, governors are signing into law bills requiring age verification for social media platforms. Because there is no reliable form of government-issued ID for the internet, the job of verifying ages (and, in some states, verifying the identities of parents to provide consent for their children to use social media) is left to social media companies.
What if age verification for children proves difficult or impossible in some circumstances? What if it is similarly challenging to associate parents and children? What if children find out that they can use VPNs to make it seem as though they are in a different state (or country)? State governments are silent on such matters, except for one thing: just like the EU, they’ll sue you if they don’t like your plan.
The age verification laws will likely be litigated before they go into effect (indeed, some already are, and the litigation may well end up invalidating all or most of these laws). But the broader point stands: the government is unwilling or unable to take up the responsibilities its laws imply, so it delegates them to technology companies. When the technology companies fail (or are perceived to fail) at these quasi-governmental functions, they get sued, fined, or both.
This does not strike me as a desirable or sustainable dynamic. It’s hard to imagine any company (certainly any non-European company) wishing to create a new operating system or personal computing device for European customers. It’s not quite as hard to imagine new social media platforms emerging, but with the regulatory burdens of age verification and other laws under consideration, it’s getting more challenging by the year.
Perhaps even more importantly, though, it’s entirely unclear whether any of these laws will work. The Digital Markets Act probably will not create competition for much of anything that matters; in all likelihood, it will just lead to another round of government-mandated pop-ups, banners, set-up screens, and the like, for European smartphone users. Any intelligent child will probably be able to get around the state-level age verification laws using VPNs. The lesson many children may take from that is, in stylized form, “the government passes laws to restrict our internet access, but they’re not savvy enough to stop us in practice.” Any child who does skirt the age verification to access, say, Instagram, will be, by definition, “outside the law.” What will this teach them about the value of obeying laws in the future? What will this teach them about the competence of our government?
There is a fundamental incoherence to the way internet and technology regulation has unfolded in the past decade. Unfortunately, this appears to be the best Western governments can come up with. Even more unfortunately, all of this is a mere dress rehearsal for what comes next: technology firms are mechanizing intelligence more and more every week, and the capabilities this will enable for the young and the old, the law-abiders and the criminals, are likely to surprise us all.
I hope we can do better as the stakes become bigger (and the stakes are not small now). I suspect the future does involve substantial public-private cooperation, much as exists today in law enforcement within the financial sector. The difference, of course, is that technology companies and the internet mediate huge portions of our day-to-day lives. As a result, the balance is harder to strike: Would you like your experience with computers to be more, or less, like interacting with the retail banking sector?
At the moment, we seem to be sleepwalking into this public-private regulatory dynamic. Policymakers and the broader policymaking apparatus seem only dimly aware of it. Some policy analysts, for example, suggest that the thorny task of age verification for users should be handled by Google and Apple in their roles as purveyors of Android and iOS, effectively canonizing in law their positions as dominant technology platforms. Yet those same people (I do not wish to name names, so no links here, but trust me—the same people) also advocate for antitrust enforcement against Apple and Google for their alleged monopolistic positions in the smartphone market. If Apple, Google, Meta, and Microsoft are going to be deputized as quasi-state actors, we should at least do so with our eyes open and, frankly, in a spirit of cooperation.
I, for one, am skeptical of taking this arrangement too far. Over the past several decades, Congress has deputized many of its core functions to better-funded and more knowledgeable executive branch agencies. The result has been unaccountable bureaucracies, unpredictable enforcement of law, and a host of other ills. I foresee similarly bad consequences if government deputizes core state functions to better-funded and more knowledgeable technology companies. These are shortcuts. The actual solution is to improve state capacity.
In the context of the internet and AI, improving state capacity will require governments that can build new capabilities and add to our collective base of knowledge, rather than simply wag their fingers and expect someone else to do the hard part. More fundamental, and even more difficult, will be the need for policymakers to recognize that the internet is not unambiguously their jurisdiction, especially if we are to maintain our commitment to freedom of speech, association, and individual choice. The internet is a force to be contended with, not a territory to command at will.