vendoring essay

This commit is contained in:
Carson Gross 2025-01-27 10:48:02 -07:00
parent 8ed85cdb22
commit 63975c72fe
5 changed files with 595 additions and 0 deletions

View File

@ -57,6 +57,7 @@ page_template = "essay.html"
* [Why htmx Does Not Have a Build Step](@/essays/no-build-step.md)
* [Is htmx Just Another JavaScript Framework?](@/essays/is-htmx-another-javascript-framework.md)
* [htmx Implementation Deep Dive (Video)](https://www.youtube.com/watch?v=javGxN-h9VQ)
* [Vendoring](@/essays/vendoring.md)
### Hypermedia Research
@ -68,6 +69,18 @@ page_template = "essay.html"
* [Hypermedia Controls: Feral to Formal (ACM HT'24)](https://dl.acm.org/doi/pdf/10.1145/3648188.3675127)
* [Preserving REST-ful Visibility Of Rich Web Applications With Generalized Hypermedia Controls (ACM SIGWEB Newsletter, Autumn'24)](https://hypermedia.cs.montana.edu/papers/preserving-restful.pdf)
### Interviews
* [Henning Koch](@/essays/interviews/henning_koch.md), creator of [Unpoly](https://unpoly.com/)
[//]: # (* [Makinde Adeagbo](@/essays/interviews/makinde_adeagbo.md), creator of [Primer](https://www.youtube.com/watch?v=wHlyLEPtL9o))
[//]: # (* [Chris Wanstrath aka @defunkt](@/essays/interviews/chris_wanstrath.md), creator of [pjax](https://github.com/defunkt/jquery-pjax))
[//]: # (* [Mike Amundsen](@/essays/interviews/mike_amundsen.md), author of [RESTful Web APIs](http://restfulwebapis.com/))
## Banners
<div style="text-align: center;margin:32px">
<img width="90%" loading="lazy" src="/img/createdwith.jpeg">

View File

@ -0,0 +1,164 @@
+++
title = "An interview with Chris Wanstrath aka @defunkt, Creator of pjax"
date = 2025-01-27
updated = 2025-01-27
[taxonomies]
author = ["Carson Gross"]
tag = ["posts"]
+++
# An Interview with defunkt, creator of pjax
I'm very excited to be able to interview @defunkt, the author of [pjax](https://github.com/defunkt/jquery-pjax), an
early hypermedia-oriented javascript library that served as an inspiration for intercooler.js, which later became
htmx. He's done a few other things too, like co-founding github, but in this interview I want to focus on pjax, how it
came to be, what influenced it and what it in turn influenced.
Thank you for agreeing to an interview @defunkt!
Q: To begin with, why don't you give the readers a bit of your background both professionally & technically:
> I think I can sum up most of my technical background in two quick anecdotes:
>
> 1. For "show and tell" in 6th grade, I brought in a printout of a web page I had made - including its source code. I
> like to imagine that everyone was impressed.
>
> 2. Right after 7th grade, a bunch of rowdy high schoolers took me down to the local university during a Linux
> installfest and put Red Hat on my family's old PC. That became my main computer for all of high school.
>
> So pretty much from the start I was a web-slinging, UNIX-loving hippie.
>
> In terms of coding, I started on QBasic using the IBM PC running OS/2 in my grandparents' basement. Then I got deep into
> MUDs (and MUSHes and MUXes and MOOs...) which were written in C and usually included their own custom scripting
> language. Writing C was "hardcoding", writing scripts was "softcoding". I had no idea what I was doing in C, but I
> really liked the softcoding aspect.
>
> The same rowdy high schoolers who introduced me to Linux gave me the O'Reilly camel book and told me to learn Perl. I
> did not enjoy it. But they also showed me php3, and suddenly it all came together: HTML combined with MUD-like
> softcoding. I was hooked.
>
> I tried other things like ASP 3.0 and Visual Basic, but ultimately PHP was my jam for all of high school. I loved making
> dynamic webpages, and I loved Linux servers. My friends and I had a comedy website in high school that shall remain
> nameless, and I wrote the whole mysql/php backend myself before blogging software was popular. It was so much fun.
>
> My first year of college I switched to Gentoo and became fascinated with their package manager, which was written in
> Python. You could write real Linux tools with it, which was amazing, but at the time the web story felt weak.
>
> I bought the huge Python O'Reilly book and was making my way through it when, randomly, I discovered Ruby on Rails. It
> hit me like a bolt of lightning and suddenly my PHP and Python days were over.
>
> At the same time, Web 2.0 had just been coined and JavaScript was, like, "Hey, everyone. I've been here all along." So
> as I was learning Rails, I was also learning JavaScript. Rails had helpers to abstract the JS away, but I actually
> really liked the language (mostly) and wanted to learn it without relying on a framework or library.
>
> The combination of administering my own Linux servers, writing backend code in Rails, and writing frontend code in
> JavaScript made me fall deeper in love with the web as a platform and exposed me to concepts like REST and HATEOAS.
> Which, as someone who had been writing HTML for over a decade, felt natural and made sense.
>
> GitHub launched in 2008 powered by, surprise, Gentoo, Rails, and JavaScript. But due to GitHub's position as not just a
> Rails community, but a collection of programming communities, I quickly evolved into a massive polyglot.
>
> I went back and learned Python, competing in a few programming competitions like Django Dash and attending (and
> speaking) at different PyCons. I learned Objective-C and made Mac (and later iPhone) apps. I learned Scheme and Lisp,
> eventually switching to Emacs from Vim and writing tons of Emacs Lisp. I went back and learned what all the sigils mean
> in Perl. Then Lua, Java, C++, C, even C# - I wanted to try everything.
>
> And I'm still that way today. I've written projects in Go, Rust, Haskell, OCaml, F#, all sorts of Lisps (Chicken Scheme,
> Clojure, Racket, Gambit), and more. I've written a dozen programming languages, including a few that can actually do
> something. Right now I'm learning Zig.
>
> But I always go back to the web. It's why I created the Atom text editor using web technologies, it's why Electron
> exists, and it's why I just cofounded the Ladybird Browser Initiative with Andreas Kling to develop the independent,
> open source Ladybird web browser.
Q: Can you give me the history of how pjax came to be?
> It all starts with XMLHttpRequest, of course. Ajax. When I was growing up, walking to school both ways uphill in the
> snow, the web was simple: you clicked on a link and a new web page loaded. Nothing fancy. It was a thing of beauty, and
> it was good.
>
> Then folks started building email clients and all sorts of application-like programs in HTML using `<frames>` and
> friends. It was not very beautiful, and not very good, but there was something there.
>
> Luckily, in the mid-2000s, Gmail and Ajax changed things. Hotmail had been around for a while, but Gmail was fast. By
> updating content without a full page load using XMLHttpRequest, you could make a webpage that felt like a desktop
> application without resorting to frames or other chicanery. And while other sites had used Ajax before Gmail, Gmail
> became so popular that it really put this technique on the map.
>
> Soon Ajax, along with the ability to add rounded corners to web pages, ushered in the era known as Web 2.0. By 2010,
> more and more web developers were pushing more and more of their code into JavaScript and loading dynamic content with
> Ajax. There was just one problem: in the original, good model of the web, each page had a unique URL that you could use
> to load its content in any context. This is one of the innovations of the web. When using Ajax, however, the URL doesn't
> change. And even worse, it can't be changed - not the part that gets read by the server, anyway. The web was broken.
>
> As is tradition, developers created hacks to work around this limitation. The era of the #! began, pioneered by
> Ajax-heavy sites like Facebook and Twitter. Instead of http://twitter.com/htmx_org, you'd
> see http://twitter.com/#!/htmx_org in your browser's URL bar when visiting someone's profile. The # was traditionally
> used for anchor tags, to link to a sub-section within a full web page, and could be modified by JavaScript. These
> ancient web 2.0 developers took advantage of #'s malleability and started using it to represent permanent content that
> could be updated inline, much like a real URL. The only problem was that your server code never saw the # part of a URL
> when serving a request, so now you needed to start changing your backend architecture to make everything work.
>
> Oh, and it was all very buggy. That was a problem too.
>
> As an HTTP purist, I detested the #!. But I didn't have a better way.
>
> Time passed and lo, a solution appeared. One magical day, the #!s quietly disappeared from Facebook, replaced by good
> old fashioned URLs. Had they abandoned Web 2.0? No... they had found a better way.
>
> The `history.pushState()` function, along with its sibling `history.replaceState()`, had been recently added to all
> major web browsers. Facebook quickly took advantage of this new API to update the full URL in your browser whenever
> changing content via Ajax, returning the web to its previous glory.
>
> And so there it was: the Missing Link.
>
> We had our solution, but now a new problem: GitHub was not an SPA, and I didn't want it to be one. By 2011 I had been
> writing JavaScript for six years - more than enough time to know that too much JS is a terrible thing. The original
> GitHub Issue Tracker was a Gmail-style web application built entirely in JS, circa 2009. It was an awful experience for
> me, GitHub developers, and, ultimately, our users.
>
> That said, I still believed Ajax could dramatically speed up a web page's user interface and improve the overall
> experience. I just didn't want to do it by writing lots of, or any, JavaScript. I liked the simple request/response
> paradigm that the web was built on.
>
> Thus, Pjax was born. It sped up GitHub's UI by loading new pages via Ajax instead of full page loads, correctly updating
> URLs while not requiring any JS beyond the Pjax library itself. Our developers could just tag a link with `[data-pjax]`
> and our backend application would then automatically render a page's content without any layout, quickly getting you
> just the data you need without asking the browser to reload any JS or CSS or HTML that didn't need to change. It also
> (mostly) worked with the back button, just like regular web pages, and it had a JS API if you did need to dip into the
> dark side and write something custom.
>
> The first commit to Pjax was Feb 26, 2011 and it was released publicly in late March 2011, after we had been using it to
> power GitHub.com for some time.
Q: I recall it being a big deal in the rails community. Did the advent of turbolinks hurt adoption there?
> My goal wasn't really adoption of the library. If it was, I probably would have put in the work to decouple it from
> jQuery. At the time, I was deep in building GitHub and wasn't the best steward of my many existing open source projects.
>
> What I wanted instead was adoption of the idea - I wanted people to know about `pushState()`, and I wanted people to
> know there were ways to build websites other than just doing everything by hand in JavaScript. Rendering pages in whole
> or in part on the server was still viable, and could be sped up using modern techniques.
>
> Turbolinks being created and integrated into Rails was amazing to see, and not entirely unsurprising. I was a huge fan
> of Sam Stephenson's work even pre-GitHub, and we had very similar ideas about HTTP and the web. Part of my thinking was
> influenced by him and the Rails community, and part of what drew me to the Rails community was the shared ideas around
> what's great about the web.
>
> Besides being coupled to jQuery, pjax's approach was quite limited. It was a simple library. I knew that other people
> could take it further, and I'm glad they did.
Q: How much “theory” was there to pjax? Did you think much about hypermedia, REST, etc. when you were building it?
(I backed into the theory after I had built intercooler, curious how it went for you!)
> Not much. It started by appending `?pjax=1` to every request, but before release we switched it to send an `X-PJAX`
> header instead. Very fancy.
>
> Early GitHub developer Rick Olson (@technoweenie), also from the Rails community, was the person who introduced me to
> HATEOAS and drove that philosophy in GitHub's API. So anything good about Pjax came from him and Josh Peek, another
> early Rails-er.
>
> My focus was mostly on the user experience, the developer experience, and trying to stick to what made the web great.
- First commit: https://github.com/defunkt/jquery-pjax/commit/3efcc3c
- X-PJAX: https://github.com/defunkt/jquery-pjax/commit/4367ec9

View File

@ -0,0 +1,55 @@
+++
title = "An interview with Makinde Adeagbo, Creator of Primer"
date = 2025-01-27
updated = 2025-01-27
[taxonomies]
author = ["Carson Gross"]
tag = ["posts"]
+++
# An Interview with Makinde Adeagbo, creator of Primer (at Facebook)
I'm delighted to be able to interview Makinde Adeagbo, one of the creators of [Primer](https://www.youtube.com/watch?v=wHlyLEPtL9o),
a hypermedia-oriented javascript library that was being used at Facebook in the early 2010s.
Thank you for agreeing to an interview!
Q: To begin with, why don't you give the readers a bit of your background both professionally & technically?
>I've always been into tech. In high school, I used to build computers for friends and family. I took the computer science classes my high school offered and went on to study computer science in college. I was always amazed by the fact that I could build cool things—games, tools, etc.—with just a computer and an internet connection.
>
>I was lucky enough to participate in Explore Microsoft, an internship that identifies underrepresented college freshmen and gives them a shot at working at Microsoft. After that experience, I was sold on software as my future. I later interned at Apple and Microsoft again. During college, I also worked at Facebook when the company was about 150 employees. It was an incredible experience where engineers had near-total freedom to build and contribute to the company's growth. It was exactly what I needed early in my career, and I thrived. From there, I went on to work at Dropbox and Pinterest and also co-founded the nonprofit /dev/color.
Q: Can you give me the history of how Primer came to be?
>In 2010, the Facebook website was sloooow. This wasn't the fault of any specific person—each engineer was adding features and, along the way, small amounts of JavaScript. However, we didn't have a coherent system for sharing libraries or tracking how much JavaScript was being shipped with each page. Over time, this led to the 90th-percentile page load time ballooning to about 10 seconds! Midway through the year, reducing that load time by half became one of the company's three top priorities. I was on a small team of engineers tasked with making it happen.
>
>As we investigated where most of the JavaScript was coming from, we noticed the majority of it was performing simple tasks. These tasks involved either fetching additional data or markup from the server, or submitting a form and then receiving more markup to update the page. With limited time, we decided to build a small solution to abstract those patterns and reduce the amount of code needed on the page.
>
>Tom Occhino and I built the first version of Primer and converted a few use cases ourselves to ensure it worked well. Once we were confident, we brought more engineers into the effort to scale it across the codebase.
Q: Primer & React were both created at Facebook. Was there any internal competition or discussion between the teams? What did that look like?
>The two projects came from different eras, needs, and parts of the codebase. As far as I know, there was never any competition between them.
>
>Primer worked well for the type of website we were building in 2010. A key part of its success was understanding that it wasn't meant to handle every use case. It was an 80/20 solution, and we didn't use it for particularly complex interactions (like the interface for creating a new post).
>
>React emerged from a completely different challenge: the ads tools. Managing, composing, and tracking hundreds of ads required a highly involved, complex interface. I'm not sure if they ever attempted to use Primer for it, but it would have been a miserable experience. We didn't have the terminology at the time, but this was a classic example of a single-page application needing purpose-built tools. The users of that site also had a very different profile from someone browsing their home feed or clicking through photos.
Q: Why do you think Primer ultimately failed at Facebook?
>I don't think there's any single technical solution that has spanned 15 years in Facebook's platform. The site's needs evolve, technology changes, and the internet's landscape shifts over time. Primer served the site well for its time and constraints, but eventually, the product demanded richer interactivity, which wasn't what Primer was designed for.
>
>Other tradeoffs also come into play: developer ease/speed, security, scalability. These priorities and tradeoffs change over time, especially as a company grows 10x in size.
>
>More broadly, these things tend to work in cycles in the industry. Streamlined, fast solutions give way to richer, heavier tools, which eventually cycle back to streamlined and fast. I wouldn't be surprised if something like Primer made a comeback at some point.
Q: How much “theory” was there to Primer? Did you think much about hypermedia, REST, etc., when you were building it?
>Not much. Honestly, I was young and didn't know a ton about the internet's history or past research. I was drawn to the simplicity of the web's underlying building blocks and thought it was fun to use those tools as they were designed. But, as always, the web is a layer cake of hacks and bandaids, so you have to be flexible.
Q: What were the most important technical lessons you took away from Primer?
>Honestly, the biggest lessons were about people. Building a system like Primer is one thing, but for it to succeed, you have to train hundreds of engineers to use it. You have to teach them to think differently about building things, ask questions at the right time, and avoid going too far in the wrong direction. At the end of the day, even if the system is perfect, if engineers hate using it, it won't succeed.

View File

@ -0,0 +1,156 @@
+++
title = "An interview with Mike Amundsen, Author of 'RESTful Web APIs'"
date = 2025-01-27
updated = 2025-01-27
[taxonomies]
author = ["Carson Gross"]
tag = ["posts"]
+++
# Hypermedia: The Important Parts
Mike Amundsen is a computer programmer, author, and speaker, and is one of the world's leading experts on REST &
hypermedia. He has been writing about REST and hypermedia since 2008 and has published two books on the ideas:
* [RESTful Web APIs](http://restfulwebapis.com/)
* [Building Hypermedia APIs with HTML and Node](http://www.dpbolvw.net/click-7269430-11260198?sid=HP&url=http%3A%2F%2Fshop.oreilly.com%2Fproduct%2F0636920020530.do%3Fcmp%3Daf-prog-book-product_cj_9781449306571_%25zp&cjsku=0636920020530)
Mike agreed to do an interview with me on his view of the history of hypermedia and where things are today.
**Q**: The “standard” history of hypermedia is Vannevar Bush's “As We May Think”, followed by Nelson introducing
the term “hypermedia” in 1963, Engelbart's “Mother of all Demos” in 1968, and then Berners-Lee creating The Web in 1990.
Are there any other important points you see along the way?
> I think starting the history of what I call the “modern web” with Bush makes a lot of sense. Primarily because you can
> directly link Bush to Engelbart to Nelson to Berners-Lee to Fielding. That's more than half a century of scholarship,
> design, and implementation that we can study, learn from, and expand upon.
>
> At the same time, I think there is an unsung hero in the hypermedia story; one that stretches back to the early 20th century.
> I am referring to the Belgian author and entrepreneur [Paul Otlet](https://en.wikipedia.org/wiki/Paul_Otlet). Otlet had
> a vision of [a multimedia information system](https://monoskop.org/Mundaneum_symposium) he named the “World Wide
> Network”. He saw how we could combine text, audio, and video into a mix of live and on-demand replay of content from
> around the world. He even envisioned a kind of multimedia workstation that supported searching, storing, and playing
> content in what was the earliest instance I can find of an understanding of what we call “streaming services” today.
>
> To back all this up, he
> created [a community of researchers](https://daily.jstor.org/internet-before-internet-paul-otlet/) that would read
> monographs, articles, and books then summarize them to fit on a page or less. He then designed an identification
> system much like our URI/URN/URLs today and created a massive card catalog system to enable searching and collating
> the results into a package that could be shared even by postal service with recipients. He created web search by
> mail in the 1920s!
>
> This was a man well ahead of his time that I'd like to see talked about more in hypermedia and information system
> circles.
**Question**: Why do you think that The Web won over other hypermedia systems (such as Xanadu)?
> The short reason is, I think, that Xanadu was a much more detailed and specific way of thinking about linking documents,
> documenting provenance, and compensating authors. That's a grand vision that was difficult to implement back in the 60s
> and 70s when Nelson was sharing his ideas.
>
> There are, of course, lots of other factors. Berners-Lee's vision was much smaller (he was trying to make it easy for
> CERN staff to share contact information!). Berners-Lee was, I think, much more pragmatic about the implementation
> details. He himself said he used existing tech (DNS, packet networking, etc.) to implement his ideas. That meant he
> attracted interest from lots of different communities (telephone, information systems, computing, networking, etc.).
>
> I would also say here that I wish [Wendy Hall](https://en.wikipedia.org/wiki/Wendy_Hall)'s
> [Microcosm](https://www.sciencefriday.com/articles/the-woman-who-linked-the-web-in-a-microcosm/) had gotten more
> traction than it did. Hall and her colleagues built an incredibly rich hypermedia system in the 90s and released it
> before Berners-Lee's version of “the Web” was available. And Hall's Microcosm held more closely to the way Bush,
> Engelbart, and Nelson thought hypermedia systems would be implemented: primarily by storing the hyperlinks in a
> separate “anchor document” instead of in the source document itself.
**Question**: What do you think of my essay “How did REST come to mean the opposite of REST”? Are there any points you
disagree with in it?
> I read that piece back in 2022 when you released it and enjoyed it. While I have nothing to quibble with, really, there
> are a few observations I can share.
>
> I think I see most hypermedia developers/researchers go through a kind of cycle where you get exposed to “common” REST,
> then later learn of “Fielding's REST” and then go back to the “common REST” world with your gained knowledge and try to
> get others on board, usually with only a small bit of success.
>
> I know you like memes, so I'll add mine here. This journey away from home, into expanded knowledge, and the return to the
> mundane life you once led is, to me, just another example of Campbell's Hero's Journey \<g\>. I feel this so strongly
> that I created [my own Hero's Journey presentation](http://amundsen.com/talks/2015-05-barcelona/index.html) to deliver
> at API conferences over the years.
>
> On a more direct note, I think many readers of Fielding's Dissertation (for those who actually read it) miss some key
> points. Fielding's paper is about designing network architecture, not about REST. REST is offered as a real-world
> example, but it is just that: an example of his approach to information network design. There have been other designs
> from the same school (UC Irvine), including Justin Erenkrantz's Computational
> REST ([CREST](https://www.erenkrantz.com/CREST/)) and Rohit Khare's Asynchronous REST (A-REST). These were efforts that
> got the message of Fielding: “Let's design networked software systems!”
>
> But that is much more abstract work than most real-world developers need to deal with. They have to get code out the
> door and up and running quickly and consistently. Fielding's work, he admitted, was on
> the “[scale of decades](https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-724)”, a scale
> most developers are not paid to consider.
>
> In the long run, I think it amazing that a PhD dissertation from almost a quarter-century ago has had such a strong
> influence on day-to-day developers. That's pretty rare.
**Question**: Hyperview, the mobile hypermedia that Adam Stepinski created, was very explicitly based on your books.
Have you looked at his system?
> I have looked over [Hyperview](https://hyperview.org/) and like what I see. I must admit, however, that I don't write
> mobile code anymore, so I've not actually written any Hyperview code myself. But I like it.
>
> I talked to Adam in 2022 about Hyperview in general and was impressed with his thoughts. I'd like to see more people
> talking about and using the Hyperview approach.
>
> Something I am pretty sure I mentioned to Adam at the time is that Hyperview reminds me of Wireless Markup
> Language ([WML](https://en.wikipedia.org/wiki/Wireless_Markup_Language)). This was another XML-based document model
> aimed at rendering early web content on feature phones (before smartphone technology). Another XML-based hypermedia
> domain-specific document format is [VoiceXML](https://en.wikipedia.org/wiki/VoiceXML). I still think there are great
> applications of hypermedia-based domain-specific markup languages (DSML) and would like to see more of them in use.
**Question**: It's perhaps wishful thinking, but I feel there is a resurgence of interest in the ideas of hypermedia and
REST (real REST). Are you seeing this as well? Do you have a sense of whether businesses are starting to recognize the
strengths of this approach?
> I, myself, think there is a growth in hypermedia-inspired designs and implementations, and I'm glad to see it. I think
> much of the work of APIs in general has been leading the market to start thinking about how to lower the barrier of
> entry for using and interoperating with remote, independent services. And the hypermedia control paradigm (the one you
> and your colleagues talk about in your
> paper “[Hypermedia Controls: Feral to Formal](https://dl.acm.org/doi/fullHtml/10.1145/3648188.3675127)”) offers an
> excellent way to do that.
>
> I think the biggest hurdle for using more hypermedia in business
> was [laid out pretty conclusively](https://www.crummy.com/writing/speaking/2015-RESTFest/)
> by [Leonard Richardson](https://www.crummy.com/self/) several years ago. He helped build a
> powerful [hypermedia-based book-sharing server and client](https://opds.io/) system to support public libraries around
> the world. He noted that, in the library domain, each site is not a competitor but a partner. That means libraries are
> encouraged to make it easier to loan out books and interoperate with other libraries.
>
> Most businesses operate on the opposite model. They typically succeed by creating barriers of entry and by hoarding
> assets, not sharing them. Hypermedia makes it easier to share and interact without the need for central control or other
> types of “gatekeeping.”
>
> Having said that, I think a ripe territory for increased use of hypermedia to lower the bar and increase interaction is
> at the enterprise level in large organizations. Most big companies spend huge amounts of money building and rebuilding
> interfaces in order to improve their internal information systems. I can't help but think designing and implementing
> hypermedia-driven solutions would yield long-term savings and near-term sustainable interoperability.
**Question**: Are there any concepts in hypermedia that you think we are sleeping on? Or, maybe said another way, some
older ideas that are worth looking at again?
> Well, as I just mentioned, I think hypermedia has a big role to play in the field of interoperability. And I think the
> API-era has, in some ways, distracted us from the power of hypermedia controls as a design element for
> service-to-service interactions.
>
> While I think Nelson, Berners-Lee, and others have done a great job of laying out the possibilities for human-to-machine
> interaction, I think we've lost sight of the possibilities hypermedia gives us for machine-to-machine interactions. I am
> surprised we don't have more hypermedia-driven workflow systems available today.
>
> And I think the rise in popularity of LLM-driven automation is another great opportunity to create hypermedia-based,
> composable services that can be “orchestrated” on the fly. I am worried that we'll get too tied up in trying to make
> generative AI systems look and act like human users and miss the chance to design hypermedia workflows built
> specifically to take advantage of the strengths of statistical language models.
>
> I've seen some interesting things in this area, including [Zdenek Nemec](https://www.linkedin.com/in/zdne/)'s
> [Superface](https://superface.ai/) project, which has been working on hypermedia-driven workflows for several
> years.
>
> I just think there are lots of opportunities to apply what we've learned from the last 100 years (when you include
> Otlet) of hypermedia thinking. And I'm looking forward to seeing what comes next.

View File

@ -0,0 +1,207 @@
+++
title = "Vendoring"
date = 2022-05-01
updated = 2022-05-01
[taxonomies]
author = ["Carson Gross"]
tag = ["posts"]
+++
"Vendoring" software is a technique where you copy the source of another project directly into your own project. It is
an old technique that has been used since time immemorial in software development, but the term "vendoring" to
describe it appears to have originated in the [ruby community](https://stackoverflow.com/posts/72115282/revisions).
Vendoring can be, and still is, used today. It can be done quite easily with htmx, for example. Assuming you have a
`/js/vendor` directory in your project, you can just download the source into your own project like so:
```bash
curl https://raw.githubusercontent.com/bigskysoftware/htmx/refs/tags/v2.0.4/dist/htmx.min.js > /js/vendor/htmx-2.0.4.min.js
```
You then include the library in your `head` tag:
```html
<script src="/js/vendor/htmx-2.0.4.min.js"></script>
```
And then check the library source into your own source control repository.
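That last step is just an ordinary commit; here is a minimal sketch, assuming the project already lives in git and uses the `/js/vendor` directory from above:
```bash
# record the vendored copy alongside the rest of the project
git add js/vendor/htmx-2.0.4.min.js
git commit -m "Vendor htmx 2.0.4"
```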
That's it, that's vendoring.
## Vendoring Strengths
OK, great, so what are some strengths of vendoring libraries like this?
It turns out there are quite a few:
* Your entire project is checked in to your source repository, so no external systems beyond your source control need
to be involved when building it
* Vendoring dramatically improves dependency *visibility*: you can _see_ all the code your project depends on, so you
won't have a situation like we have in htmx, where we feel like we only have a few development dependencies, when in
fact we may have a lot
* This also means if you have a good debugger, you can step into the library code as easily as any other code. You
can also read it, learn from it and even modify it if necessary.
* From a security perspective, you aren't relying on opaque code. Even if your package manager has
an integrity hash system, the actual code may be opaque to you. With vendored code it is checked in and can be
analysed automatically or by a security team.
* Personally, it has always seemed crazy to me that people will often resolve dependencies at deployment time, right
when your software is about to go out the door. If that bothers you, like it does me, vendoring puts a stop to it.
On the other hand, vendoring also has one massive drawback: there typically isn't a good way to deal with what is called
the [transitive dependency](https://en.wikipedia.org/wiki/Transitive_closure) problem.
If htmx had sub-dependencies, that is, other libraries that it depended on, then to vendor it properly you would have to
start vendoring all those libraries as well. And if those dependencies had further dependencies, you'd need to install
them as well... And on and on.
Worse, two dependencies might depend on the same library, and you'll need to make sure you get the
[correct version](https://en.wikipedia.org/wiki/Dependency_hell) of that library for everything to work.
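To make that concrete, here is a purely hypothetical sketch (none of these library names or URLs are real) of what vendoring a dependency tree by hand can turn into:
```bash
# vendor the (imaginary) direct dependency...
curl https://example.com/foo-1.2.0.min.js > js/vendor/foo-1.2.0.min.js
# ...then everything it depends on...
curl https://example.com/bar-3.1.0.min.js > js/vendor/bar-3.1.0.min.js   # required by foo 1.2.0
# ...and then discover that another dependency needs a different version of bar
curl https://example.com/baz-2.0.0.min.js > js/vendor/baz-2.0.0.min.js   # requires bar 2.x, not 3.x
```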
This can get pretty difficult to deal with, but I want to make a paradoxical claim that this weakness (and, again, it's
a real one) is actually a strength in some way:
Because dealing with large numbers of dependencies is difficult, vendoring encourages a culture of _independence_.
You get more of what you make easy, and if you make dependencies easy, you get more of them. Making dependencies,
_especially_ transitive dependencies, more difficult would make them less common.
And, as we will see in a bit, maybe fewer dependencies isn't such a bad thing.
## Dependency Managers
That's great and all, but there are [significant](https://gist.github.com/datagrok/8577287)
[drawbacks](https://web.archive.org/web/20180216205752/http://blog.bithound.io/why-we-stopped-vendoring-our-npm-dependencies/)
to vendoring, particularly the transitive dependency problem.
Modern software engineering uses dependency managers to deal with the dependencies of software projects. These tools
allow you to specify your project's dependencies, typically via some sort of file. They will then install those
dependencies and resolve and manage all the other dependencies that are necessary for those dependencies to work.
One of the most widely used package managers is NPM: The [Node Package Manager](https://www.npmjs.com/). Despite having
no runtime dependencies, htmx uses NPM to specify 16 development dependencies. Development dependencies are dependencies
that are necessary for development of htmx, but not for running it. You can see the dependencies at the bottom of
the NPM [`package.json`](https://github.com/bigskysoftware/htmx/blob/master/package.json) file for the project.
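As a concrete sketch of what that looks like with NPM (the package name here is only an illustration), a single command records a development dependency in `package.json` and installs it:
```bash
# adds "mocha" under devDependencies in package.json and installs it,
# along with all of its transitive dependencies, into node_modules/
npm install --save-dev mocha
```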
Dependency managers are a crucial part of modern software development and many developers today couldn't imagine
writing software without them.
### The Trouble with Dependency Managers
So dependency managers solve the transitive dependency problem that vendoring has. But, as with everything in software
engineering, there are tradeoffs associated with them. To see some of these tradeoffs, let's take a look at the
[`package-lock.json`](https://github.com/bigskysoftware/htmx/blob/master/package-lock.json) file in htmx.
NPM generates a `package-lock.json` file that contains the resolved transitive closure of dependencies for a project, with
the concrete versions of those dependencies. This helps ensure that the same dependencies are used unless a user
explicitly updates them.
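In rough terms, the standard NPM workflow (a sketch, not htmx-specific) looks like this:
```bash
npm install   # resolves version ranges and records the result in package-lock.json
npm ci        # installs exactly what the lockfile specifies, for reproducible builds
```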
If you take a look at the `package-lock.json` for htmx, you will find that the original 16 development dependencies have
ballooned into a total of 411 dependencies when all is said and done.
htmx, it turns out, relies on a huge number of packages, despite priding itself on being relatively lean. In fact,
the `node_modules` folder in htmx is a whopping 110 megabytes!
But, beyond this bloat there are deeper problems lurking in that mass of dependencies.
While writing this essay I found that htmx apparently depends on
[`array.prototype.findlastindex`](https://www.npmjs.com/package/array.prototype.findlastindex), a
[polyfill](https://en.wikipedia.org/wiki/Polyfill_(programming)) for a JavaScript feature introduced in
[2022](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/findLastIndex).
Now, [htmx 1.x](https://v1.htmx.org/) is IE compatible, and I don't *want* polyfills for _anything_: I want to write
code that will work in IE without any additional library support. And yet a polyfill has snuck in via a chain
of dependencies (htmx does not directly rely on it), a dangerous polyfill that would let me write
code that would break in IE, as well as in other older browsers.
This polyfill may or may not be available when I run the htmx [test suite](https://htmx.org/test/) (it's hard to tell)
but that's the point: some dangerous code has snuck into my project without me even knowing it, due to the number
and complexity of the (development) dependencies it has.
This demonstrates a significant _cultural_ problem with dependency managers:
they tend to foster a culture of, well, dependency.
A spectacular example of this was the infamous [left-pad incident](https://en.wikipedia.org/wiki/Npm_left-pad_incident),
in which an engineer took down a widely used package and broke the build at companies like Facebook, PayPal, Netflix,
etc.
That was a relatively innocuous, although splashy, issue, but a more serious concern is
[supply chain attacks](https://en.wikipedia.org/wiki/Supply_chain_attack), where a hostile entity is able to compromise
a company via code injected unwittingly via dependencies.
The larger our dependency graph gets, the worse these problems get.
## Dependencies Reconsidered
I'm not the only person thinking about our culture of dependency. Here's what some other, smarter folks have to say
about it:
[Armin Ronacher](https://x.com/mitsuhiko), creator of [flask](https://flask.palletsprojects.com/en/stable/)
recently said this on [the ol'twits](https://x.com/mitsuhiko/status/1882739157120041156):
> The more I build software, the more I despise dependencies. I greatly prefer people copy/pasting stuff into their own
> code bases or re-implement it. Unfortunately the vibe of the time does not embrace that idea much. I need that vibe
> shift.
He also wrote a great blog post about his
[experience with package management](https://lucumr.pocoo.org/2025/1/24/build-it-yourself/) in the Rust ecosystem:
> It's time to have a new perspective: we should give kudos to engineers who write a small function themselves instead
> of hooking in a transitive web of crates. We should be suspicious of big crate graphs. Celebrated are the minimal
> dependencies, the humble function that just quietly does the job, the code that doesn't need to be touched for years
> because it was done right once.
Please go read it in full.
Back in 2021, [Tom Macwright](https://macwright.com) wrote this in
[Vendor by default](https://macwright.com/2021/03/11/vendor-by-default)
> But one thing that I do think is sort of unusual is: I'm vendoring a lot of stuff.
>
> Vendoring, in the programming sense, means “copying the source code of another project into your project.” It's in
> contrast to the practice of using dependencies, which would be adding another project's name to your package.json
> file and having npm or yarn download and link it up for you.
I highly recommend reading his take on vendoring as well.
## Software Designed To Be Vendored
Some good news, if you are an open source developer and like the idea of vendoring, is that there is a simple way to
make your software vendor-friendly: remove as many dependencies as you can.
[DaisyUI](https://daisyui.com/), for example, has been in the process of
[removing their dependencies](https://x.com/Saadeghi/status/1882556881253826941), going from 100 dependencies in
version 3 to 0 in version 5.
There is also a set of htmx-adjacent projects that are taking vendoring seriously:
* [Surreal](https://github.com/gnat/surreal) - a lightweight jQuery alternative
* [Facet](https://github.com/kgscialdone/facet) - an HTML-oriented Web Component library
* [fixi](https://github.com/bigskysoftware/fixi) - a minimal htmx alternative
None of these JavaScript projects are available in NPM, and all of them [recommend](https://github.com/gnat/surreal#-install)
[vendoring](https://github.com/kgscialdone/facet#installation) the [software](https://github.com/bigskysoftware/fixi#instalation)
into your own project as the primary installation mechanism.
## Vendor First Dependency Managers?
The last thing I want to briefly mention is a technology that combines both vendoring and dependency management:
vendor-first dependency managers. I have never worked with one before, but I have been pointed to
[vend](https://github.com/fosskers/vend), a Common Lisp vendor-oriented package manager (with a great README), as well
as [Go's vendoring option](https://go.dev/ref/mod#vendoring).
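For what it's worth, the Go version of this is roughly the following (a sketch; see the linked reference for the details):
```bash
go mod vendor          # copies all dependencies, transitive ones included, into a ./vendor directory
go build -mod=vendor   # builds against the vendored copies instead of the module cache
```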
In writing this essay, I also came across [vendorpull](https://github.com/sourcemeta/vendorpull) and
[git-vendor](https://github.com/brettlangdon/git-vendor), both of which are small but interesting projects.
These all look like excellent tools, and it seems to me that there is an opportunity for some of them (and tools like
them) to add additional functionality to address the traditional weaknesses of vendoring, for example:
* Managing transitive dependencies, if any
* Relatively easy updates of those dependencies
* Managing local modifications made to dependencies (and maybe help manage contributing them upstream?)
With these additional features I wonder if vendor-first dependency managers could compete with "normal" dependency
managers in modern software development, perhaps combining some of the benefits of both approaches.
Regardless, I hope that this essay has helped you think a bit more about dependencies and perhaps planted the idea that
maybe your software could be a little less, well, dependent on dependencies.