---
narrator: Robin Berjon
subject: Web standards
facilitator: Nathan Schneider
date: 2025-08-01
approved: 2025-08-04
summary: "The standards that govern the World Wide Web develop at the intersection of profit-seeking companies, nonprofit organizations, and small groups of people with rarefied expertise."
location: "Brussels, Belgium"
headshot: "robin_berjon.jpg"
topics: [decentralization, open source, organization, software, standards]
links:
  - text: "Personal website"
    url: "https://berjon.com"
  - text: "Social media"
    url: "https://robin.berjon.com"
---
*How do you prefer to introduce yourself?*

I'm Robin Berjon, and I generally describe myself as a technologist working on issues of governance. It's sort of fuzzy and blurry, but that's basically what I'm doing.
*How did that journey begin for you, and when?*

It began more or less in the mid-1990s when I was a first-year philosophy student. I got a computer to do philosophy essay writing and homework on. Instead of doing philosophy, I started doing a lot of web things. I'd seen computers before, but they never had people in them. That was immediately fascinating to me.

I built a website that came second in a website competition. I got hooked on that and started making my own tiny web company. Not a family-money kind of thing, but just a tiny thing with my roommate. We basically had a computer between the two of us and started making websites for people. I would do nights, he would do days, because we only had the one computer.

That collapsed within a few months because we had no idea what we were doing. But I got the bug for it and started another company in Belgium that worked significantly better. I started working more consistently in tech from then on. I started asking questions: "Hey, this HTML thing is nice, but I would like it to work differently. Who do you have to ask? How does this thing work? Where does it even come from?"

There was this weird organization called the W3C, or World Wide Web Consortium, where apparently people discussed these things and started agreeing on how they would work. It was complicated to observe from the outside. Back then, as someone who wasn't a paid member, you couldn't get in. You could only send feedback from the outside and maybe receive an answer within a week or two.

I started getting interested in how you change the styling of things and eventually started scratching at that. In 2001, I was invited to participate in SVG---Scalable Vector Graphics, a file format. In 2002, I got a proper job where I was doing standards. Then I did a lot of standards for the following decades.
*Can you say a bit about what drew you into the SVG process? What were you engaged in there?*

What I liked with SVG is that it was a very powerful graphics environment. With JavaScript and all that, you could start to represent anything, from documents to games. You could use it as a rendering layer on the web.

I started building SVG things as part of my job. We were still making websites for other people. I started adding SVG to projects when possible, which was way too cutting edge back then because you couldn't use it. Not enough people had the SVG capabilities on their computer. I had to constantly find customers who would be interested, so I ended up working, for instance, with the French---I don't know what it's called---the people who manage the road network at the level of France. They had these very complicated mapping requirements. They had this antique database with all kinds of weird conventions, and they wanted to bring that into a more modern world. They exported to XML and then wanted to turn that into something else. I started doing all these things around SVG.

Eventually, since it was a relatively small community, the working group noticed and brought me in as an invited expert without having to pay for membership.
*What is the business model for working on standards? Why is it valuable for a company---and in particular, the cases that you were starting out with---to pay someone to work on abstract rules for the whole ecosystem?*

That's a perennial question, and I don't think anyone has a definitive answer, or at least not an answer that works in all contexts. For that specific company, it was a small startup in Paris. When you're small and you have a very specialized area of knowledge, you need to create markets for things. You need to create some stability to improve your credibility.

What they had was a binary XML format primarily focused on optimization of transport at a very infrastructural level. The kind of customer that could adopt that thing would be a large telco or TV broadcaster---those kinds of big companies. But you're not going to get a large telco to adopt something deep in its infrastructural stack that's made by a company of twelve people in France with no proven business model.

In order to solve that issue, the company was very interested in developing standards that included or referenced or made use somehow of their technology so that they would have something credible to offer these large companies. Of course, there's a flipside to that---they had to open up the technology and share it with others.

It was a trade-off. If they kept everything completely proprietary, well, it stayed theirs---and they had patents and everything, which was still done a lot at the time---but then they could have no customers, so not wonderful. Or they could agree to open it up at least somewhat and share it with others, and then get customers. That was essentially the play they made.

They were also interested in figuring out avenues in which their technology could be used. Even though they were not directly working on SVG themselves, they were very interested in the potential bridging between their technology and SVG. So they allowed me to continue working on the SVG working group. When I was with that company, I made quite a few projects that used subsets of SVG in embedded devices with tiny screens and limited processing capabilities.
*How did your involvement in W3C develop? This was a long-term process. How did it stick for you?*

What really made it stick for me was that back then it was a fun community. I felt at the time that there were a lot of shared values. People were there from all over---a lot of people who didn't have any formal training in computer science or anything. It was very different from trying to talk to people who were old school professional programmers or people who went to engineering schools. They tended to look down on web people at the time.

In W3C, you made friends with someone, and they'd be, for instance, a history major---it was a very cobbled together, motley-crew community at the time. People were talking about building the web, and how amazing it was going to be, and all the cool things that we could do.

There's also the thing that---there was this sort of whiplash thing where I was super young and I didn't come from anywhere particularly interesting. Just because I was specialized in this super specific thing, all of a sudden people were flying me to Australia and Japan to talk about my work. This tiny bit of expertise that I randomly developed immediately became weirdly relevant in ways that I absolutely hadn't anticipated. As a twenty-year-old, it's exciting---"Wait, I get to fly to Japan?"
*Just to paint a picture of what is happening here: Are these conversations taking place largely in in-person meetings? Are they taking place largely on email lists or things like that? Where is this discussion occurring? Where are these dynamics unfolding?*

It changed over time. If we're talking about the early 2000s, most of the conversations were on mailing lists and in chat on IRC. You would get to know what people looked like only when you finally met them, but otherwise you had no idea what they looked like. A lot of the time the groups would have a weekly or every other week, or maybe monthly---it depends---phone call. That's very different in terms of focus and difficulty compared to video calls, where you can see the person speaking. These were a grind. They were really difficult, especially if you're working internationally. Everyone has accents all over the place.

The tooling was interesting in those days because W3C had better tooling than other contexts. For instance, they had their own phone bridge---a physical phone---and that was driven by a laptop that had Windows 3.1, I think, on it. The one guy at W3C who was a really good hacker had bridged it to IRC. So you could be in IRC. A lot of the time you didn't know who was speaking unless you recognized everyone's voice. It was tricky. You could ask the bot, "Who's speaking?" You could get the bot to mute or unmute people. There was a whole lot of tooling that worked that way.

I think my first in-person meeting at W3C was the first meeting of that group when they decided they had sufficiently good Internet at the meeting place that they didn't need to bring a server with the email archives. Normally, for the meetings, someone would come with an actual computer that would be the email archives where the group discussions had taken place, because you need to refer to them. That's where the issues are and things like that. They would basically plug it into a local LAN, and everyone in the room could read the email archives on location.

It's not like that anymore. Nowadays it's all GitHub issues and stuff like that.
*How did the relationship progress between standards development and your day jobs?*

I worked at that company for several years, and within that time I started chairing a working group, then a second working group, then being editor of several things. I was on the Advisory Committee of W3C. I basically said yes to every opportunity, which was not necessarily very intelligent in terms of time management, but it was so attractive, and so interesting, that I couldn't say no.

Then, because I became a specialist, I was more hireable in that space. So there was this loop. The first job I got after that was with a company that wanted to build a video system where everything was standards-based. The entire environment was---the entire application was built around Mozilla Gecko. It was a fork of XULRunner, for those who remember. The entire user interface was HTML and CSS and SVG, and the data backend was RDF. The whole thing was super standards-centric. Again, building a completely different set of products with a completely different focus, but it still involved the same building blocks.

After that company, I started my own consultancy, working for other companies as a standards specialist. I was usually referred to as tech strategist or something, because the idea was really that you would come in and take in the business strategy that existed, and figure out how that mapped onto standards participation or more generally tech development. The web was still very confusing to many companies. Precisely for the kind of question that you asked initially---how does it make sense? What is the approach? How does it work?

I did that for several years, and in part I worked for a lot of tiny startups that often had a very specific goal: "We're building this. We're trying to understand the role of standards." In those cases, twenty percent of one person is a huge investment for them. So they were figuring out, "Is it worth it if we do this? What's the most effective way of doing it?"

At the other end of the spectrum, I also had massive multinational companies---Vodafone, Samsung, Canon---these really big companies. A lot of the time, they had very much the same questions.
One of the things that I always remember, and that explains a lot about the power structures in standards---I remember being contracted by Samsung HQ, which is rare. They didn't bring in people from abroad to the Korean HQ very often. They brought me in, and there were all these people with very important-sounding titles, half of them from tech units and half of them from strategy.

Essentially they opened by saying, "We have a problem we would like you to solve, which is that we don't think that Samsung can have any influence in web standards." This is from a company---you have to imagine, I was sitting in a part of the world that's called Samsung City, because it's a city where they have their own police force, their own supermarket. All the buildings are built by Samsung construction, they're insured by Samsung insurance. The whole thing---this is a William Gibson kind of futuristic massive-corporation world, and then they go, "We don't think we can have influence on web standards. Can you help us, Mr. One-Person Company from nowhere?"

It was true, because even though they had, I think at the time, something like an 8 percent market share with Samsung Internet Browser, they had no idea how to use that as political leverage or influence in the standards process---how to put people in the right positions, how to bring in a developer perspective. It was very much a hardware company, and hardware companies do not understand a sort of very agile, fluffy, and imprecise software world. It was baffling.

The kind of process that you have at Canon, for instance---they would design chips two years in advance, and have a full specification with tests, and everything. The idea that you could just rock up there and code up a feature on the thing and just ship it---and then, oops, it's buggy, sorry, we'll just fix it---was something that was outside of that sphere. Those interactions were super interesting in seeing how worlds collided there.
*In these kinds of interactions, did you ever feel a sense of divided loyalty between the interests of the company that you might be working for or consulting with, and the organization---the standards themselves, the broader ecosystem?*

That's interesting because I think I pretty much always managed to dodge the issue. It's an hourglass communication system. You're this tiny, very small thing, with an organization on one side and an organization on the other, and you're the entire point of contact between the two. You can sort of represent---without lying---the information from one side to the other and back.

One thing that is pretty clear is you could be Samsung, and you still have no decision power over what W3C will agree to. You have influence if you play your cards right, but the whole thing---if you want to have influence, you can't be a total asshole. When something would come up that might trigger divided loyalties, I would say, "Yeah, that's very interesting. But I'm not sure I could get it adopted by the community, because that is the kind of thing that they disagree with"---without saying, "I disagree!" and without going into a group as a mercenary. You could make that representation, which wasn't lying in the sense that it was true. It's very hard---at least, it was particularly hard at a time when this process was not captured by a few big companies---to make a change without getting significant political support from multiple people, and that generally meant having to align with the values in one sense or another.
*Did everyone else operate that way?*

No, but it's certainly true that---and this is something I explained to my customers as a freelancer---you can't be a mercenary and be credible. What the group is interested in is not the fact that you're representing Company A, B, or C. The group is interested in your expertise as the person who's in the room. If for the first six months you work for Samsung and you're saying, "Yeah, we really need this feature," and then two weeks later, you move to Canon, and you're saying, "Oh, no, screw that feature. It's a really bad idea"---your credibility is shot. So the group won't be interested in what you have to say anymore---or very occasionally, when they wonder what the companies think. But in terms of expertise, it wouldn't work. That is something I would always explain to clients initially. People understand it well. It's your expertise that is valued, and that is how you bring influence. But you can't just snap your fingers and make things happen.

There is one company that did try. I didn't even start working for them, but they tried to basically get me to abandon my previous customers because they wanted an exclusive deal. They promised a lot of money and were saying, "We want you to work for us full time starting next week. We don't want to wait for the three or four months"---or however much was left on my previous contract. "We'll give you money to match or more than you would make from them to compensate, you just have to ditch them this weekend."

I said no, in part because that's not how I work, and in part also because I don't want to work for people who operate that way. But also, you're talking about a tiny pool of potential customers. Even from a purely self-interested perspective, that's the kind of thing that would have shot my credibility right away.
*You mentioned the sense that something has changed from this period. What years are we talking about when the standards processes you were involved in were more distributed---felt more like something that required widespread buy-in---and then walk us into the story of capture that you alluded to.*

The years I'm talking about are---when did I stop consulting? I stopped consulting in 2015. It goes more or less up to then-ish. It's not like there was a sea-change moment where one day everything was fine, and the next day everything was captured. It happened gradually. I think a lot of us were frogs boiled in that water quite progressively before people started noticing. There are still people who are starting to notice today. So there's a spectrum. But in those initial years, from say 2000 to 2015, things were more balanced.

Eventually, it comes down to what the enforcement mechanisms are. We're talking about what's called voluntary standards. In theory, it's standards that you would only adopt if you want to adopt them, and otherwise you can ignore them. Of course, that's never really---it's rarely really the case. But the enforcement mechanism for standards-making and standards adoption was the market. You could rely on market discipline. If most of the players have agreed on the standard and you haven't, you're just going to lose out on the market. When that works, it's a very convenient enforcement mechanism, because you don't, as an institution, have to do any of the enforcement work. That's what always makes the market so attractive---it's just, yeah, like magic.

But, of course, that assumes that there's competition. The moment competition disappears, the moment the market ceases to have any kind of disciplining power, then you lose that factor. Voluntary standards start to lose the ability to operate as shared standards.

In terms of capture, it wasn't immediate, but the gradual focus on only doing the browser engine part---so really just the rectangle inside the browser chrome, and nothing else---was very much driven by that increasing power by certain players. They didn't want us to standardize search protocols, or e-commerce protocols, or advertising, or anything in the higher layer that they could see as capturable.

There was a push to focus just on this, and it was always presented in terms of, "That's the specificity of the organization. That's what we're good at. That's where we can drive interoperability. Let's leave the rest to *innovation*"---which I think is always a red flag. It gradually got to this place where now it's pretty much only those standards. The only companies that have a say are the ones who have implementation power. That's mostly Google, Apple, and a tiny weeny bit of Mozilla.
*Around 2015, you switched out of consulting. Was that because of the changes that were taking place? What brought about that shift for you? And where did it lead you?*

No, it wasn't because of those changes. It was still early enough in the transformation that I either hadn't noticed, or it wasn't bothering me yet. But starting from 2012, mid-2012-ish, my primary customer became W3C itself. I was still formally a consultant, but during mid-2012 to 2015 I was paid in part by MIT, and in part by Keio University in Tokyo, to work as part of the W3C team editing the HTML5 standard. We had this situation in which W3C had gone down a bad direction with HTML, trying to make it all about XML. That was very unpopular with developers and browser vendors. So the browser vendors sort of forked and went to build their own HTML in the WHATWG. Then there was an attempt to bring everyone back around the same table, because that was silly. I was hired to be the W3C part of that sort of rejoining-the-people.

Part of it was editing the actual spec, bringing it to completion. I think we were given a mandate to close it in two years, and it took twenty-six months, which was good, because no one believed the two years was possible. That was very aggressive. I think we had 400 issues when we started.

Part of it was to be a diplomat and rejoin those two communities from the bottom up. We knew there were people who wouldn't ever get along, ever, but it felt that it was possible to drive alignment, and that actually did work. But to answer your initial question: in 2015, I was really done with that. It was a grueling process, and I was ready to work on something that was not standards at all. So I went to work for a startup, as CTO doing some product work.
*As you reflect on processes that affect things we experience on the web all the time, do you see traces of those processes---and of your handiwork---in your experience of the web today? How does somebody who has been there in the room and on those lists throughout that period experience the web differently from somebody who wasn't?*

There's several things. There are times I see features and I'm thinking, "Oh, yeah, I remember when we were talking about that." It's something silly, and it's not important---doesn't necessarily affect my experience of the web. But you see it in there. It often comes when, say, maybe there's a bug in a video UI thing inside the browser, and I'm thinking, "Oh, that's clearly because they didn't set the whatever-attribute correctly." I remember we talked about how that might go wrong, the trade-offs, et cetera. So you have an extra level of understanding of what's going on. That jumps out at weird times.

There is another thing where I think, more than a decade later, a sense of "Hey, we got that right." One of the hard projects that got off the ground as part of this HTML thing---the project I really relied on to bridge communities---is Web Platform Tests.

Everyone hates writing tests. It's a drag. You have to go through the spec with a fine-tooth comb and find all the corner cases of any given statement, write code that matches every single thing, and then run it, and then look at what happens---the whole thing is terrible. But also when it's there, and it works, you have actual interoperability.

Before the HTML thing I was doing, every specification had a separate test suite. They used different frameworks. A lot of them did it as a checkbox exercise---so the entire SVG test suite, I think, was 180 tests, or something like that, which is ridiculous. It's tiny. Groups would do it once to get approval to move forward with the standard, and then they would never touch it again, and no one would use it. So you got all these interoperability problems from lack of testing.

We decided to get serious about it. I basically went and took everyone's test suite. I didn't ask permission. I took everyone's test suite, dumped them in this massive repo, and started running really horrendous Perl code to replace the ad hoc frameworks everyone had, and put them all in the same kind of test framework---which some other guy had written, and it was very usable and very good. That's how we started having a unified test suite for the entire web platform.

Today, I think it has two million tests or something like that. It's still really big. It's operated in production by every single large browser vendor. So any change you make to any browser will go through the test suite, and if it adds a failure, or whatever, it will notify them.

I still get little sparks of excitement from that thing whenever I write a relatively complex web thing. I'm on Firefox. When I develop, I only look at Firefox, and then at the end, I usually look at other browsers. That's when you're thinking, "Oh, yeah, I've forgotten that Chrome doesn't support this or whatever." But sometimes you do it, and it just works the same everywhere.

It's hard to convey just how incredibly hard it is to get a reproducible execution of something as complex as HTML plus CSS plus JavaScript, plus all those APIs---this is an insanely complex platform. The fact that you can write something complicated, and it works the same in completely independently implemented browsers, still sort of gives me a bit of goosebumps. Just by doing that, we saved---there are about twenty million web developers worldwide, I think. We saved all of these people so many hours each, and they've been able to build better things for it. You still see the value in that.
*Amazing. But after that process you moved on to a startup and then also, later, the New York Times. Tell us a little bit about those experiences.*

The startup was very startup-y---five or six of us in a room that was probably ten feet by twelve or something like that. It was called science.ai, and the goal of the startup was to fix scholarly publishing---which, as you are well aware, we didn't succeed in doing. But the tech stack was very interesting. I still think that with more money and better strategic decisions, it could have succeeded, but it didn't.

One thing that was interesting is I was running away from standards, and I managed to do only product work for about a year at that startup. Then it became very clear that we would need to do standards work for what we wanted to achieve. Because if you're building replacement document formats for scholarly publishing, and you're talking to a Wiley or Elsevier, et cetera---once again, you're not going to say, "Hey, please use this crazy little thing that these five people did." You have to document it and start standardizing it.

I tried to avoid doing too much of it, so there was a strategy of doing enough to make them happy, but not a full standards project. But we used a lot of Schema.org to make sure that we were grounded in an ontology that was maintained elsewhere. Then we had this project that brought together what I called HTML vernaculars. The idea was that you could do specialized versions of HTML that would map to a specific domain. You would constrain the HTML in specific ways and also enhance it in specific ways. We constrained it to just be the kind of content that you would have in the scholarly article, which is already quite broad, and then enhance it with all these semantic annotations from Schema.org so that you could say that this figure is this type of figure, and it was authored by this person, who is different from the authors of the paper. We wrote a spec called Scholarly HTML around that. But I was still trying to stay away from standards.
Then at the Times again I managed to go, I think, two or three years without doing standards work. But at some point we needed it for strategic reasons. Google was trying to change how advertising was working and doing this whole "Privacy Sandbox" stuff. The Times needed to be in that room and be in those conversations, and since I was the person doing privacy and strategy around data and tech, that fell to me.

The Times was really trying to push for this world in which you had only one data controller. When someone interacts with you as a first-party website, only you as the website control the data, even if you work with other people---you're still the driver.

That's why we worked on GPC, the Global Privacy Control. With the law that was emerging in California, I wrote the spec specifically for the technical signal in browsers to match the law. Because this needed to move forward and they needed someone who understood standards for that. So yeah, it tends to catch up to me. Right now, I'm in a phase where I'm really trying not to do standards. But I'm not sure how successful I'll be.
*That was a case of standards as regulatory compliance. Is that something that had been a big part of your story before? Or was that something new at that point because new regulations were coming online?*

It was pretty new at that point. It's not the first such thing, but it's definitely the first that I was involved in, and it's still not a big thing. I think it should be a much bigger thing. I think there's huge promise in using standards processes to complement the work of regulators. But this was---and just to give you a sense for how hard it is to bring lawmakers and technologists into the same room to align on a standard---this is a one-bit standard. This is a standard for the transmission of a single bit over HTTP, which is a well-known protocol. Several years in, it is adopted but not yet ratified. So there's the whole human component of getting all these interests---the business interests and regulation---to align so that you get a standard. It is pretty challenging.
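To make the "one bit over HTTP" concrete: the Global Privacy Control signal is literally a single request header, `Sec-GPC: 1`, sent by a browser whose user has enabled it. A minimal sketch of how a server might check for it (the `honors_gpc` helper and the plain-dict headers are illustrative assumptions; only the header name and value come from the GPC specification):

```python
def honors_gpc(headers: dict) -> bool:
    """Return True if the request carries the Global Privacy Control signal.

    Per the GPC draft specification, a browser with GPC enabled sends the
    request header "Sec-GPC: 1"; any other value, or its absence, means
    no signal was expressed.
    """
    return headers.get("Sec-GPC") == "1"


# Example: a request from a GPC-enabled browser.
request_headers = {"User-Agent": "ExampleBrowser/1.0", "Sec-GPC": "1"}

if honors_gpc(request_headers):
    # Under laws like the CCPA, a site may be required to treat this
    # one bit as a do-not-sell/share opt-out for the visitor.
    pass
```

The entire technical surface really is that small; as the interview notes, the hard part was aligning lawmakers, businesses, and technologists around what the bit means.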
*To pick up on the earlier story of capture, you mentioned that different people in the community discovered that there were frogs in boiling water at different times. Can you describe a moment or a process when you started to change your perspective on what was going on?*
|
||||
|
||||
For me, it really was working at the New York Times that helped me realize we had a problem. Before that, if you'd asked me, I would have said, "Yeah, those Google people---I mean, clearly, they're not very good at privacy, that's not a thing they do well. But I've met a lot of them, and they mean well. They're really trying to do something, and it's complicated. I'll be the first to point out their failures and all that. But overall, it's looking pretty well."
|
||||
|
||||
Then I got to see how those tech monopolies treat the media, including pretty powerful media companies. You'd think that the New York Times would have a say. But really what they get is fake deference. It's like the tech companies will send twenty people to the meeting to tell you you're important, but then they won't change anything. The constant arrogance of those tech monopolies---where they assume that if you work for a media organization, you don't understand technology. The people would explain very silly things. I saw that any change I was trying to make to push technologists or people in the standards world towards solutions that would work better for the media would stop moving. You could push a little bit, and then you'd feel a massive resistance.
For instance, one of the things I was interested in at the Times was not doing AMP, because AMP takes your content away and publishes it on google.com instead. You no longer get data, and it's basically Google---
*What does AMP stand for?*
Accelerated Mobile Pages. So it's the whole idea that, because of performance and because the open web has to beat the mobile native apps, you have to give all your content to Google, who will publish it for you.
*Facebook was doing that, too, right?*
Yeah, so AMP was the most aggressive one by far. But yeah, Facebook had something called Facebook Instant Articles that was horrendously, badly designed. Clearly it was one person's job to figure out the format, and they had never built a format in their lives before. But Facebook didn't care. Facebook doesn't care about tech quality. They just care about shipping.
Apple also has the thing that they use for Apple News, which is also not really great. None of them thought to reuse an existing thing---maybe RSS. Google was very aggressive in pushing it, though, because if you didn't do AMP, you couldn't be in the AMP carousel, which means you couldn't be at the top of the search results.
They kept saying, "Oh, it doesn't help for your ranking, because it doesn't change the ranking. It just puts you more at the top." Yeah, okay, so it's not ranking except it's the only way to be in the top position. Gotcha. I could see how everything would get locked down if you tried to push back.
That was one of the ways I almost went back into standards when I was at the Times. I started talking about, "Hey, how about we standardize ways of doing content aggregation such that publishers have a say about how it works, and we can make it work in a way that doesn't push everything to Google?" Everything ground to a halt. You could see that all the avenues of discussion would freeze up. I was thinking, "Okay, yeah, I know who's doing that."
*After the Times, you shifted to a different kind of organization. You were starting to work with organizations that---for instance, Protocol Labs, or the IPFS Foundation---were not just businesses using standards. They're organizations that are trying to build protocols rather than the platform model that big tech companies were involved in. Could you talk a bit about that transition?*
I went there because that's what I was looking for. After five years at the Times, I felt that it was not possible to move the web, either from inside standards organizations, or from significant businesses that were not themselves big tech.
I didn't want to go to big tech with the hope of changing things from the inside, because I've seen too many people do that over the years, and nothing ever changes. Then you have all these people who are smart somewhere on the inside, but who keep justifying things that are less and less justifiable. They are basically frogs boiling themselves. So I didn't want to be one of those people.
It wasn't easy to find a place where my skill sets would work---but at the same time not be a complete blockchain thing, and still be adjacent to this dWeb and web3 world. I really didn't want to do a five-person startup again. I didn't feel I had the energy after all that. So I landed at Protocol Labs.
It was a very chaotic company, I have to say. But there was a very significant community of people who also wanted to do what I wanted to do. Even though I wouldn't say that anything we built at that time has had massive commercial success yet, the sort of excitement and research and experimentation that happened there is starting to pay dividends today in terms of better protocols that are built on good ideas.
That's a lot of what I've been focusing on at the IPFS Foundation. To give you a bit of context, IPFS was invented, I think, in 2013, 2014---ten-ish years ago. It was this way of doing content addressing. But over the years, many cooks were involved, and also it sort of worked on the principle that it needed a lot of optionality to work in different contexts. While that made it very flexible, it also made it very challenging to implement well, and it made it very hard to build anything on top of that you could expect interoperability from.
What I've been working on---at the tail end of this crazy few years of experimentation---is, okay, how can we make these ideas more usable? A project called DASL ("dazzle") is in the process of taking this and eliminating all the options, eliminating everything that's not reliable and just picking one. Even if it's controversial, it doesn't matter. Sometimes there's no good choice. You just pick one, making these tiny specs that can easily be reused by other protocols. The AT Protocol that underlies Bluesky uses DASL under the hood for data, for CIDs---for content identifiers and for packaging.
I think that there is something to the basic idea of data that can be self-certifying. You can have linked sets of content addressed data. I think it changes the kind of governance that you can build on top of the system compared to something that uses a more traditional domain name authority.
*Can you explain what the goal of IPFS is? Protocol Labs is trying to build an economic layer on top of that, I know. It's an addressing scheme, but to what end?*
There's so many different ways of describing it and all of them are partial truths.
The first thing I always explain is that IPFS stands for InterPlanetary File System, and it is neither interplanetary nor a file system. On the first part, the interplanetary part, there is a satellite in Low Earth Orbit that is conducting IPFS-related experiments. So that's as far as the interplanetarity goes. In terms of the file system, well---when you tell people you have a file system, they expect you to give them something like the Finder or whatever, a directory browser. You put a file there, and it's going to be there. If you come to the same file system from another machine---because it's interplanetary---you'd expect to find that file, which you generally won't in IPFS.
IPFS is essentially a suite of protocols to retrieve data in a content-addressed manner. So content-addressed means that the address of a piece of data is derived from its hash. So it's derived from its content. The retrieval method for that can be---it's very open-ended in IPFS. There is this thing called the IPFS Principles that actually celebrates the fact that it's open-ended. I mean, it's great that it's open-ended. But that doesn't always help people building apps.
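The core idea of content addressing can be sketched in a few lines: the identifier is computed from the bytes themselves, so anyone holding the data can check it against its address. This is a minimal illustration only, not the real IPFS CID format, which layers multihash and multibase encodings on top of the hash.

```python
import hashlib

def content_address(data: bytes) -> str:
    # The address is derived purely from the content's hash,
    # so the same bytes always yield the same address.
    return "sha256-" + hashlib.sha256(data).hexdigest()

addr = content_address(b"I love cats")

# Anyone receiving the bytes can recompute the hash and
# verify they got exactly what the address names.
assert content_address(b"I love cats") == addr
assert content_address(b"I love dogs") != addr
```

Because the address is a pure function of the content, changing even one byte produces a different address, which is what makes retrieval from untrusted peers safe.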
The core---the most typical way of retrieving IPFS content---is that there's this global distributed hash table, a DHT. Anyone who wants to expose data on the IPFS protocol through that network basically says, "Hey, I have this content, and here are the hashes for that content." When you connect to the distributed hash table, if you have the hash for something you want, you can use that distributed hash table in a peer-to-peer fashion, using libp2p or something to find who is actually providing that data. It could be multiple people on the network---and then you fetch it from them.
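The provide/find flow just described can be mimicked with a toy in-process table. In the real system this table is distributed across peers (a Kademlia-style DHT reached over libp2p), but the shape of the interaction is the same: providers announce hashes, and anyone holding a hash can look up who serves it.

```python
import hashlib

# Toy stand-in for the global DHT: content hash -> set of providing peers.
dht: dict[str, set[str]] = {}

def provide(peer: str, data: bytes) -> str:
    """Announce that `peer` can serve `data`; return its content ID."""
    cid = hashlib.sha256(data).hexdigest()
    dht.setdefault(cid, set()).add(peer)
    return cid

def find_providers(cid: str) -> set[str]:
    """Look up which peers claim to provide the content behind `cid`."""
    return dht.get(cid, set())

cid = provide("peer-A", b"hello ipfs")
provide("peer-B", b"hello ipfs")  # multiple peers can provide the same content
assert find_providers(cid) == {"peer-A", "peer-B"}
```

In the real network you would then fetch the blocks from any of the returned peers and verify them against the hash, so it does not matter which peer you happened to pick.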
*What will ordinary users be able to do with this that they can't do now?*
I think it's not so much about what users can do directly, in terms of user interface. In my mind, forgetting the specificities of IPFS, but really thinking in terms of content addressing and self-certifying data, it really is about the kinds of governance systems that you can build on top of this.
One way of thinking about that---I always tend to think of protocols in terms of Elinor Ostrom, ADICO, and institutional analysis, and all that. If you think of how data works in Web 2.0, for instance, where the authority for any information you have is grounded in the Domain Name System, you know it's a true thing, or it's authoritative in the sense that you got it from the horse's mouth.
For instance, if we're on Twitter---the only way I can know that I'm reading a tweet from you on Twitter is by trusting that Twitter really received that from you, verified that it's from you, and then is giving me something untransformed. But technically, they could go in---you tweeted "I love cats," and they could go in and just replace "cats" with "dogs" and show that to me. You could tell me that it's not true, but authoritatively, Twitter is telling me that. That is architecturally part of the HTTP protocol. It's part of how we've built the web. Any institutional arrangement you build on top of that has to build in that trust of a specific party. It becomes this control point of power for all kinds of interactions you might want to build. That creates bottlenecks, and it increases the institutional complexity of what you're building.
If you switch from that to a system that's content-addressed---where you know you're getting the right thing because you can always verify that you got the right thing---you know the data is correct intrinsically, without needing to ask anyone else. There's no other authority involved.
On top of that, you can---because it's all hashed and deterministic and all self-certifying---you can also add a signature layer. If I know that you have a specific key, you can sign that content and say, "It's from me." I can then have a thing that's a hash that has the content and the signature embedded in it. I know all of these things come together. It's a real statement from you. It has its own authority. Then, because you have the content identifiers that are basically links between various things, you can have a graph.
I have this thing that is content-addressed, so I know the content is correct. It's signed, and it's also referencing all these other things. I know that these references are correct, and therefore I can follow this thing and know what I'm getting without any third-party authority.
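The "cats" versus "dogs" scenario above can be made concrete. The sketch below binds content, an author's signature, and a hash-derived identifier into one record, so an intermediary that alters the content breaks verification. Real deployments use public-key signatures (e.g. Ed25519); HMAC stands in here purely so the example needs only the standard library, and the record layout is illustrative, not any actual protocol's format.

```python
import hashlib
import hmac
import json

def make_record(author_key: bytes, content: bytes) -> dict:
    # HMAC stands in for a real public-key signature (illustration only).
    sig = hmac.new(author_key, content, hashlib.sha256).hexdigest()
    record = {"content": content.decode(), "sig": sig}
    # The record's identifier hashes content and signature together,
    # so the statement and its attribution travel as one unit.
    cid = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return {"cid": cid, **record}

key = b"robin-secret-key"
rec = make_record(key, b"I love cats")

# An intermediary that swaps "cats" for "dogs" invalidates the record:
tampered = dict(rec, content="I love dogs")
expected = hmac.new(key, tampered["content"].encode(), hashlib.sha256).hexdigest()
assert expected != tampered["sig"]
```

The point is that validity is checkable from the record itself: no platform has to be trusted to relay the statement faithfully.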
Just to add a small point on top of that, in terms of the institutions that you can build: it creates a lot more flexibility. For instance, say again that we're in a social media environment, and you want to create a feed generator, and that feed generator has content from arbitrary people. Normally, I would have to trust you not to transform that content. But in this case you can't. If you transform it, the thing becomes invalid. This means that you can create your own thing, and I don't have to worry about what you're doing other than maybe I'm interested in the governance of how the content gets in. But all the other nitty-gritty of the data itself is taken care of.
To my mind, what matters are the things you can build on that. It means that you switch to a system where you can have an institution over here dealing with identity, an institution over there dealing with data storage, one here that produces feeds, and another here that does search---and you don't need to integrate them. They can operate separately. A good separation of concerns makes them simpler, and they can remain trustworthy in terms of what you see. That's really the goal of these things. It's not, "Hey, you can now do crazy AI, with whatever-super-gradient-looking features." You can build a new world. That's really what I'm interested in.
*How does the work of building protocols for building a new world compare to working in standards organizations with big companies? I mean, in both cases, you're trying to build a kind of rule book, but I imagine it's a very different kind of process.*
It's different. But you always end up having a bunch of geeks in a discussion channel explaining technology to one another. Very quickly, you get interest from---not maybe the Googles or the Apples, but significantly larger companies start to get involved relatively quickly, because if you have something with promise, and you can demonstrate that promise, they come in.
But one thing that's different is---and it might not be an actual difference, it's more like a time shift. The vibe is much closer to what it was like to do web standards in the early 2000s. We have meetings that have maybe ten people and it's super friendly. It's relatively informal. We know that we're a small group who understand these things, and that there's not many other people who understand them. It's not a point of pride, but it creates a bond. You keep having these conversations where you're thinking, "No, no, I promise you---self-certifying data is something that transforms the governance of digital---" and outside that group, those are conversations that are hard to have. Because you have to give twenty years of background and a bit of computer science about what hashes are---because you have something new that no one has explained to the world yet.
It makes those meetings very nice, because you're thinking, "Oh, for the next hour, I can just kick back and just say things plainly the way they are in my brain without having to provide seven layers of explanation." It's also very interesting, because people build cool, small things that they demo to one another, which is something I haven't seen in a while. It used to be that on the web---"Hey, I made this crazy table. Look how cool it is. It's all pixelated!"
For instance, there's this streaming service that is all around self-certifying data. All the video blocks are self-certified, and they create the giant Merkle tree. The whole thing is crazy from a technical level. But that guy could explain it, and he joined one of the meetings, and within two minutes, you could see he understood that everyone knew what he was talking about. His eyes lit up, and he was thinking, "Oh, baby!" He would start talking about how they have this guy who now broadcasts 24/7 streaming, and they don't know how big you can make a Merkle tree of video fragments---you really get that vibe.
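A Merkle tree over video fragments is easy to sketch: hash each fragment, then hash pairs of hashes upward until a single root remains. A viewer holding only the root can verify any block of the stream. This is a generic sketch under common conventions (duplicating the last node on odd levels), not the specific format that streaming service uses.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(chunks: list[bytes]) -> bytes:
    # Leaves are hashes of the fragments; parents hash concatenated children.
    level = [h(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

fragments = [b"frame-0", b"frame-1", b"frame-2"]
root = merkle_root(fragments)

# Changing any fragment changes the root, so the root commits to
# every block of the stream at once.
assert merkle_root([b"frame-0", b"frame-1", b"frame-X"]) != root
```

For a live stream, new fragments extend the tree and only the small chain of hashes from a fragment to the root is needed to verify that one fragment, which is why the structure scales to long broadcasts.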
I really think it's a time thing. If we do it right and this is the next world, which I hope it is, at some point we're going to screw up and there'll be a new oligarchy. The question is how long can we---how slow can we make the capture process? I think by building better fundamentals in there, we can make it slower. We can enable much more democratic powers, and hopefully, instead of a twenty-year or fifteen-year path to oligarchy, we can get a two-hundred-year path to oligarchy---make it the problem for our great-great-grandchildren.
*It's something to aspire to. Based on these lessons---in some respects you described what you're doing as returning to where you started and trying again---can you say a bit about the lessons you've learned? Not only for your own work, but what do you try to impart among the twenty-year-olds showing up in these spaces, and having the kind of excitement that you had when you first entered the web standards world?*
I try not to pontificate at the twenty-year-olds too much, in part because they wouldn't listen anyway. But if there's something that I think has become really important in understanding how to build these systems, it is this idea that technology is politics by other means.
A lot of what got us to fail in the previous iteration is we were a community that was very much a product of the 1990s. Neoliberalism---great, it works. We built these systems where---and you see that in all the "splinternet" discourse---fragmentation was always bad. You have only two levels, the global and the individual. Anything that intervenes in between is bad. It's going to slow you down, it's going to be a problem. So you build these systems such that you have that global standard for everything. You make it good because you're "ethical," and you have the "right values." Then individuals use it, and they have some choice. That's it. So we really built a system that reflects that. The current Internet governance institutions still reflect very much that mindset.
If that had been on purpose---people consciously trying to build exactly that system---then, fine. I mean, I would politically disagree. But at least you could say that this was done on purpose. It wasn't. It was done by default, through lack of understanding of the mechanics involved in building this.
Really, now, I'm very adamant about the idea that this is a political project. What we're building is democracy. In the same way that what makes science work is democracy, the project here is a democratic project. It's a political project through and through.
We have to stop seeing it as a defensive thing. There's a lot in the IPFS world that's very much about censorship resistance---the idea that you have an attacker from the outside, and you're protecting against that. But you're not proposing anything positive. I'm really trying to ground this in a capabilities approach, looking at what capabilities we're giving people. Again, self-certifying data gives you the capability of building something such that you can trust the data, no matter what institutional structure you put around it. The thing I would bang people on the head about is an old joke that I made many years ago: if you got into tech because you didn't like politics, now you have two problems.
That is the core of it. I think people who are interested in the architecture of technology and of protocols today should take the time to familiarize themselves with subsidiarity, polycentricity---basically, how institutions work, how democracy works. It's not just about voting or capture resistance. Then you could build much, much better protocols from that. That would be my lesson learned. I'm sorry it took me twenty fucking years to get to that point, but you gotta start sometime.