<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.2.2">Jekyll</generator><link href="https://staltz.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://staltz.com/" rel="alternate" type="text/html" /><updated>2023-07-20T13:44:12+03:00</updated><id>https://staltz.com/feed.xml</id><title type="html">André Staltz</title><subtitle>Open Source Freelancer</subtitle><author><name>André Staltz</name></author><entry><title type="html">Google shattered human connection</title><link href="https://staltz.com/google-shattered-human-connection.html" rel="alternate" type="text/html" title="Google shattered human connection" /><published>2023-07-20T00:00:00+03:00</published><updated>2023-07-20T00:00:00+03:00</updated><id>https://staltz.com/google-shattered-human-connection</id><content type="html" xml:base="https://staltz.com/google-shattered-human-connection.html"><![CDATA[<p>People who have used the web since the 90s generally miss those times, because they were calmer, they were cooler (in terms of homepage creativity), and less weaponized and polarized. The web has changed a <em>lot</em> in the past two decades, and the biggest advances were the advent of search engines and social media.</p>

<p>One of these in particular, Google’s search engine, is typically criticized for its centralization, its hoarding of user metadata, and more recently for a decay in search result quality. In this article I’m not going to talk about any of those. This morning while drinking my coffee, I noticed a critical way in which Google has hurt the web and society in general. In fact, this criticism extends to all search engines, including the one I use, DuckDuckGo. But since Google is still the dominant one, this article focuses on them.</p>

<p>To start with my main point: <strong>Google popularized the habit of taking things out of context</strong>. Google allowed users to cut several steps out of their discovery journey, creating a more direct sense of “information at your fingertips”, which was a common mantra at the time (<a href="https://www.youtube.com/watch?v=tWd8DxLfDek">thank you, Bill Gates</a>, <a href="https://www.youtube.com/watch?v=o0O0Xjpjvfc">twice</a>). This is not new information; we’ve always known that Google made information retrieval easier. The problem is that making something easy has a dark side: whatever used to be hard stops being done at all. Specifically, Google eliminated the need to connect with communities online if all you wanted was the knowledge produced by that community. And connecting with people and communities – in the style we still practiced in the 90s – is time-consuming, often hard.</p>

<p><strong>Sometimes, barriers to entry can be good.</strong> Browsing (excuse me, “surfing”) the web in the 90s was a chaotic discovery process. Sometimes, after clicking page after page (each of which took several seconds to render), you would stumble upon something that interested you, which meant that your journey through those pages was your “context”. You arrived at a destination in <em>relation</em> to other places you visited. As an example, as a teenager on the early web, I often browsed through videogame-related pages. On a catalog-like page, I carefully went through each website listed. One day I stumbled upon a game maker forum, and this was an amazing discovery. I created an account, and committed to participating in the community. The discovery process plus the account creation was the barrier. It filtered for people who were willing to pay the “barrier cost”, guaranteeing that the community was made up entirely of people who cared enough.</p>

<p>Google eliminated barriers. It was sufficient to have just a fleeting thought about a topic, and Google was ready to send you to the depths of a community of specialists. This had several effects. The most obvious is the selfish one: the Google user instantly benefited from the insights of that community (if they could parse the expert jargon). Another effect is that reducing these barriers to entry meant that any “rando” could join <em>any</em> community, without having to care for or be interested in the community. But the effect I really want to talk about is how the elimination of barriers and discovery journeys meant that Google users were invited to take things out of context. They call those things “information”.</p>

<p>We now assume it is an established truth that the internet is made of “information” or “content”. This has not always been the case. If you asked a 1990s web user what the internet was, you would probably get divided opinions. Some people would describe it as “information at your fingertips”, but others would say it’s a place to meet and connect with people, either in BBSes, FidoNet, IRC, forums, or surfing people’s homepages. The former is the information paradigm, and the latter is the community paradigm.</p>

<p>I would say that the dominant one today is the information paradigm. People are seen as either information producers or content creators, and “communities” are just places for their content or information to be shared so that reach is maximized. Heck, even now as I write this article I’m hyperaware of what I’m doing – producing information – and how it’s going to be shared: on social media to an ultimately amorphous audience. Genuine community building still exists, because the human need for connection is inexhaustible, but the community paradigm for the internet is lost.</p>

<p>The problem with the information paradigm is how “information” is ripped out of its context: the people, the inherited knowledge, the culture that produced it. Everything is seen as an atomic digestible, and there is little regard for the processes, conversations, and debates that produced those digestibles. The whole idea of “information” is somewhat of a farce; I doubt you can truly learn and internalize some information without learning the surrounding information that sheds light on the nuances involved. These are things that only knowledgeable people can distinguish and report on. But it is legitimately hard to know who is an expert in what, since – thanks to Google – we all have access to atomic digestibles from any community of experts, and can easily copy-paste them whenever needed in a discussion (with whom? Probably randos on social media).</p>

<p>I won’t paint the 90s as a perfect world (it was not), but there is something we don’t have today which still existed in the 90s, prior to the mass adoption of Google. <strong>People asked each other for help</strong>, and whenever they knew something, they would answer. If they didn’t know the answer, they would refer you to someone else who knew better. As a universal example that rings true for anyone older than 30: if you were lost in a new city, you would ask a local stranger for directions. If that stranger didn’t know the answer, they would refer you to another stranger who probably knew more. Similarly, if I had a friend who was a doctor (or studying to become one), I would ask health-related questions. Their answers often included disclaimers, pros and cons, and even uncertainty. The same went for friends who knew about electronics, or other topics.</p>

<p>The recurring pattern in those examples is <strong>connection</strong> and <strong>commitment</strong>. The local stranger is connected and committed to their environment, they live in it, aware of its contour, remembering its details. The doctor is connected and committed to their healthcare institution, they’re intimately familiar with books on medicine, statistics, chemistry and biology, after having committed years of their life to this knowledge. All of this takes time and effort, and becomes a part of the person, even part of their identity. A person is truly a member of their community or their surroundings when they embody and represent its culture and legacy.</p>

<p>With Google, all of that was shattered to the winds, indexed, optimized, and presented to you in under 100 milliseconds. Connection and commitment are irrelevant and frankly unnecessary when you can instantly retrieve directions in a new city with Google Maps, look up the most common medication for your symptoms, and so forth. All without interacting with a single human being. Or at least not directly, because ultimately all of this comes from communities of people. Information is the inhumane essence that is squeezed out of humanity. Even when you’re scanning reviews on Amazon, you’re interacting with the <em>informationesque</em> quantified essence of humans, and only indirectly interacting with actual humans. Curation is the internet’s community paradigm in a servitude relationship with the information paradigm.</p>

<p>What happens when connection is made unnecessary, while humans still infinitely crave it? Social media. It’s a place where people go to <em>feel</em> connected and understood, but in reality they are just being fed atomic digestibles. These digestibles are tailored to their unique interests, yet on a person-to-person level they are entirely disconnected. Apart from the short-lived chain of replies, there are no conversations. The commonalities in social media “relationships” are shared interests, nothing else. Content is found wherever it is found, and tossed back and forth between these “relationships”. People are primarily interested in their interests, and only secondarily interested in the people who produce those digestible interesting things – content. All while thinking that they are satisfying their need for human connection! This is not obvious at first, but becomes clear as soon as the <del>content creator</del> person changes (gasp) interests or (worse) opinions or (even worse) political alignment. The relationship is immediately demoted whenever the shared interest is threatened. There is little conversation, little nuance, little commitment to keep on talking. Facing a vast ocean of people who agree with you, there is little incentive to commit to talking with people you disagree with.</p>

<p>I know Facebook is typically credited with popularizing social media, but this time I’d like to argue that Google took an active role in creating social media, and I don’t mean <a href="https://en.wikipedia.org/wiki/Google%2B">Google+</a> or its dozens of failed attempts. I mean in how Google advanced the ideology of information at your fingertips. Facebook in the beginning was actually entirely about connection and commitment, when its membership was made up only of people committed to the same college. Google had an interest in indexing the information on Facebook and Twitter, and all these other social networks popping up. Facebook actually had an interest in locking down that information, creating <em>barriers</em> for search crawlers. Google wanted to eliminate those barriers and open it all up for indexing. Of course, Google wasn’t alone in promoting the ideology of instant information disconnected from communities, but it was arguably the largest and most visible representation of that ideology.</p>

<p>The information paradigm gradually evolved. As Google indexed information on the web and presented it in scrollable lists, it inspired Facebook to index people’s conversations into scrollable lists, represented by what they called the News Feed. Treating people’s conversations as “news” that are “fed” is what made Facebook change from social network to social <em>media</em>. This in turn evolved into the “content creator” ideology, as a way of optimizing for your interests, where you only follow people who say things that you love. It became fast paced with Twitter’s short-form posts. It became high resolution with YouTube. And then it became fast paced <em>and</em> high resolution with TikTok. And here we are today.</p>

<p>As individuals, our need for human connection is still there. As society, our need to listen to nuanced information from experts is still there. As communities, our need to have persistent and shared history with people we do life together with – who can agree or disagree with us – is still there.</p>]]></content><author><name>André Staltz</name></author><category term="blog" /><summary type="html"><![CDATA[People who have used the web since the 90s generally miss those times, because they were calmer, they were cooler (in terms of homepage creativity), and less weaponized and polarized. The web has changed a lot in the past two decades, and the biggest advances were the advent of search engines and social media.]]></summary></entry><entry><title type="html">Back to the Web</title><link href="https://staltz.com/back-to-the-web.html" rel="alternate" type="text/html" title="Back to the Web" /><published>2023-04-22T00:00:00+03:00</published><updated>2023-04-22T00:00:00+03:00</updated><id>https://staltz.com/back-to-the-web</id><content type="html" xml:base="https://staltz.com/back-to-the-web.html"><![CDATA[<p>I used to blog a lot more. There are a lot of reasons for that, perhaps the biggest one is the standards I hold myself to. Ever since my articles started getting viral and being mentioned in the news, writing on my blog is not a light activity anymore. I want to change that.</p>

<p>We are currently witnessing the most fragmented environment for social networks since the dawn of Twitter and Facebook. The two reasons are: Twitter is in decay, and decentralized alternatives are alive and thriving. This is good and bad. Good because, hey, we’re finally decentralizing this space! Bad because it’s unclear (at least that’s how I’ve been feeling) where to publish your content.</p>

<p>I used to be very active on Twitter, but Twitter is ruined for me now. It’s not about mister Tesla, actually. It’s all the changes that they’re making to Twitter that make it objectively worse than before. A quick list of complaints: the “For You” tab is visually bugged on Android, long-form tweets with a “read more” button are a huge departure from the “Twitter concept”, “For You” is too algorithmic, and I highly suspect that my tweets get less engagement because I don’t pay for Twitter Blue. And then, a lot of people have left for Mastodon, so Twitter itself isn’t the same place anymore.</p>

<p>About Mastodon, I can’t like it. I’ve tried, and I don’t like it. Ever since I experienced PHP forum dramas in the 2000s surrounding forum admins’ bad decisions, I have been forever changed. I fundamentally don’t trust any system that has an admin with sudo powers, be it centralized social networks or forums or Fediverse instances. That said, I trust small server admins <em>less</em> than I trust a large company. Someone bored, at midnight, with sudo powers can do a lot of damage. I may have an account, but the more I pour value into it, the less safe I feel about its future in the hands of that admin. I personally haven’t seen Mastodon admin drama yet, but my background makes me uncomfortable with the system to begin with.</p>

<p>I have a BlueSky account. But it’s on iOS only, and that’s not my main mobile device, and many times I prefer to use desktop. Time will tell what will happen with BlueSky, but I’m not holding my breath.</p>

<p>I have tried Substack Notes as an alternative. I was excited about the idea, but it has a few technical quirks (I wasn’t able to copy my own text from the mobile app), and the audience there is not the same as the one on Twitter. I’m not sure if I’ll continue using it.</p>

<p>Obviously, I have posted on SSB (<a href="https://manyver.se">Manyverse</a>) quite a lot, and often long-form. I would like to link folks to those posts, but SSB is a secret garden, by design. And that’s a good thing. Just not for broadcasting ideas to the world.</p>

<p>So, back to the Web. My blog is something I own, I can shape it however I want, I already have an audience for it, and I’m the admin. Everyone is already familiar with the concept of sharing web links, so there’s no learning required from anyone. You know what, the web is still awesome at letting you share long-form documents. That was its original purpose, so let’s do this!</p>]]></content><author><name>André Staltz</name></author><category term="blog" /><summary type="html"><![CDATA[I used to blog a lot more. There are a lot of reasons for that, perhaps the biggest one is the standards I hold myself to. Ever since my articles started getting viral and being mentioned in the news, writing on my blog is not a light activity anymore. I want to change that.]]></summary></entry><entry><title type="html">Parametric Progress</title><link href="https://staltz.com/parametric-progress.html" rel="alternate" type="text/html" title="Parametric Progress" /><published>2023-04-22T00:00:00+03:00</published><updated>2023-04-22T00:00:00+03:00</updated><id>https://staltz.com/parametric-progress</id><content type="html" xml:base="https://staltz.com/parametric-progress.html"><![CDATA[<p>It’s been some 10 years since I started my career as a developer, and one of the most important habits I have learned is something I call “Parametric Progress”. I would have told younger me about this, if I could.</p>

<p>When you’re working on changing a system (and codebases are systems), it is extremely tempting to change more aspects than you originally planned to. Say you wanted to just fix a bug. You found the culprit, but you also saw a badly named variable, a function that could be split into two, some code style changes to make, some libraries to be updated, and something else that <em>seemed</em> like a bug. So you fixed all of those things, and committed them all at once.</p>

<p>This may seem efficient, because you are doing more work in one go, but it’s not. It’s actually the opposite. It is more efficient to fix only one thing per git commit. Choose one aspect, or one “parameter”, and change only that. Then, see what happens, learn about the effects of your change, and then move on to the next parameter. Thus “Parametric Progress”.</p>
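<p>As a concrete illustration of one fix per commit, here is a minimal sketch (the repo, file names, and commit messages are hypothetical, not from any real project). Two unrelated improvements sit in the working tree, and each one is staged and committed on its own:</p>

```shell
# Toy repo in a temp directory; file names are hypothetical.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name Dev

printf 'buggy code\n' > fix_me.txt
printf 'bad name\n' > rename_me.txt
git add . && git commit -qm "baseline"

# Two unrelated changes were made while investigating one bug...
printf 'fixed code\n' > fix_me.txt
printf 'good name\n' > rename_me.txt

# ...but each aspect gets its own commit:
git add fix_me.txt && git commit -qm "fix: correct the bug"
git add rename_me.txt && git commit -qm "refactor: rename variable"

count=$(git rev-list --count HEAD)
git log --oneline
```

<p>When the unrelated changes live in the same file, <code>git add -p</code> lets you stage individual hunks, so each aspect can still land as a separate commit.</p>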

<p>The diagram below illustrates how this may seem counter-intuitive. If you change many things at once (left side in the diagram), it may seem like you are taking shortcuts, and thus arriving at the goal faster. And Parametric Progress (right side) may seem like making detours.</p>

<p><a href="/img/parametric-progress.jpg"><img src="/img/parametric-progress.jpg" alt="Parametric Progress" /></a></p>

<p>The obvious reason why you should avoid changing many things at once is lack of focus. Don’t go yak shaving, don’t allow feature creep. Just do the thing you were meant to do, and nothing else. But I have another reason, which is more subtle.</p>

<p>Systems are sensitive things. Once you change one aspect, there is always a chance that other parts of the system will be affected by your change in ways that surprise you. So it’s best to change only one aspect, and then learn how the system reacts to that change. If you notice a new bug arise, you can be sure that it was caused by that specific change, and you can more easily learn the cause and effect relationship. However, if you change many aspects at once and a new bug arises, you don’t know which of the changes caused it. It is often quite possible that even a code style change causes a bug.</p>

<p>Programmers sometimes talk about spooky black magic whenever they are completely baffled by a bug. The feeling of black magic happens when your mental model of the system is not in sync with the reality of the system. And changing many things at once does not easily allow you to learn about the system. Developers talk too much about writing code. Learning code is far more important. Be in sync with your system, and you’ll fix bugs more easily, not to mention prevent bugs in the first place. Use something such as parametric progress to improve your learning of the system.</p>]]></content><author><name>André Staltz</name></author><category term="blog" /><summary type="html"><![CDATA[It’s been some 10 years since I started my career as a developer, and one of the most important habits I have learned is something I call “Parametric Progress”. I would have told younger me about this, if I could. When you’re working on changing a system (and codebases are systems), it is extremely tempting to change more aspects than you’re originally planned to. Say you wanted to just fix a bug. You found the culprit, but you also saw a badly named variable, a function that could be split into two, some code style changes to make, some libraries to be updated, and something else that seemed like a bug. So you fixed all of those things, and you committed it. This may seem efficient, because you are doing more work in one go, but it’s not. It’s actually the opposite. It is more efficient to fix only one thing per git commit. Choose one aspect, or one “parameter”, and change only that. Then, see what happens, learn about the effects of your change, and then move on to the next parameter. Thus “Parametric Progress”. The diagram below illustrates how this may seem counter-intuitive. If you change many things at once (left side in the diagram), it may seem like you are taking shortcuts, and thus arriving at the goal faster. And Parametric Progress (right side) may seem like making detours. 
The obvious reason why you should avoid changing many things at once is lack of focus. Don’t go yak shaving, don’t allow feature creep. Just do the thing you were meant to do, and nothing else. But I have another reason, which is more subtle. Systems are sensitive things. Once you change one aspect, there is always a chance that other parts of the system will be affected by your change in ways that surprise you. So it’s best to change only one aspect, and then learn how the system reacts to that change. If you notice a new bug arise, you can be sure that it was caused by that specific change, and you can more easily learn the cause and effect relationship. However, if you change many aspects at once and a new bug arises, you don’t know which of the changes caused it. It is often quite possible that even a code style change causes a bug. Programmers sometimes talk about spooky black magic whenever they are completely baffled by a bug. The feeling of black magic happens when your mental model of the system is not in sync with reality of the system. And changing many things at once does not easily allow you to learn about the system. Developers talk too much about writing code. Learning code is far more important. Be in sync with your system, and you’ll fix bugs easier, not to mention preventing bugs in the first place. Use something such as parametric progress to improve your learning of the system.]]></summary></entry><entry><title type="html">Time Till Open Source Alternative</title><link href="https://staltz.com/time-till-open-source-alternative.html" rel="alternate" type="text/html" title="Time Till Open Source Alternative" /><published>2022-08-27T00:00:00+03:00</published><updated>2022-08-27T00:00:00+03:00</updated><id>https://staltz.com/time-till-open-source-alternative</id><content type="html" xml:base="https://staltz.com/time-till-open-source-alternative.html"><![CDATA[<p>Open source is coming for your business. 
It is just a matter of time before there exists a compelling open source alternative to your software. It won’t happen overnight: it will start out as a poor alternative, but slowly grow into the robust and cheap (in fact, free!) solution that everyone uses.</p>

<p>In this blog post, I’ll prove this to you with data. I present a measurement I call “Time Till Open Source Alternative” (TTOSA), which represents how long a proprietary product lasted without a direct open source alternative.</p>

<p>The average TTOSA for the cases I measured is 7 years, and that seems like <em>plenty</em> of time for a business to be profitable with proprietary software, especially given that once the open source alternative hits the market, it still takes <em>years</em> until it can outright displace the proprietary champion. Doesn’t seem like there’s a problem here. However, TTOSA is shrinking: it’s becoming easier and easier to build open source alternatives, and lately we’ve been seeing a <strong>lot</strong> of them pop up on GitHub. The current world record for quickest TTOSA is 244 days, held by <a href="https://github.com/foambubble/foam/">Foam</a>, an alternative to <a href="https://roamresearch.com/">Roam Research</a>. This is a trend, and it means a lot of things for the future of software and the related businesses.</p>

<h2 id="the-data">The data</h2>

<p>I’ve been occasionally collecting data for this manually, and it’s large enough now to publish. The following table lists proprietary software and corresponding open source alternatives that were <strong>directly</strong> inspired by the proprietary software. For both, I tried to determine the <strong>birth date</strong> of the software, which was often either the date the company was founded, or the date of the initial commit. My sources for these dates have been Crunchbase, Wikipedia, and git logs on GitHub.</p>

<p><em>Time Till Open Source Alternative</em> is then defined as the difference between those dates, assuming that the proprietary software always comes first. I identified 39 cases; see the table below, or <a href="https://github.com/staltz/ttosa">get the raw CSV here</a>.</p>
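<p>As a concrete sketch of the arithmetic, TTOSA is simply the number of days between the two birth dates. In the sketch below, the BitKeeper-side date is an assumption used only for illustration; Git’s initial commit landed on 2005-04-07.</p>

```shell
# TTOSA = birth date of the open source alternative
#         minus birth date of the proprietary software
proprietary="1998-01-01"  # assumed founding date for BitKeeper's company, illustrative only
alternative="2005-04-07"  # date of Git's initial commit

# GNU date, forced to UTC so daylight-saving shifts cannot skew the difference
seconds=$(( $(date -u -d "$alternative" +%s) - $(date -u -d "$proprietary" +%s) ))
echo "TTOSA: $(( seconds / 86400 )) days"  # prints "TTOSA: 2653 days"
```

<p>Applied to each of the 39 pairs, this same subtraction yields the TTOSA column in the table below.</p>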

<p><a href="/img/ttosa-table.png"><img src="/img/ttosa-table.png" alt="Time Till Open Source Alternative, table" /></a></p>

<p>Plotting this data on a chart, where the X-axis is the date the company was founded, and the Y-axis is the associated TTOSA, we get the following:</p>

<p><a href="/img/ttosa-chart.png"><img src="/img/ttosa-chart.png" alt="Time Till Open Source Alternative, chart" /></a></p>

<p>Two things become clear with this chart: first, there is an explosion of dots after the mid-2000s, and this is probably correlated with the rise of the Web and subsequently the GitHub era (GitHub was founded in 2008). Second, the trend is downward.</p>

<h2 id="measurement-method">Measurement method</h2>

<p>The keen reader may notice that 39 open source alternatives doesn’t seem like a lot. Some popular open source projects may come immediately to mind and not be on that list. For instance, the Apache HTTP Server is not on the list.</p>

<p>That’s because I’m measuring <strong>direct alternatives</strong> to proprietary software. Apache was not <em>created as a</em> direct replacement for some known proprietary software, even if in practice it could replace proprietary software at the time. Contrast that with GNU/Linux, for instance, where the name “GNU’s Not Unix” makes a direct reference to Unix. Open source alternatives like that will often have the name of their proprietary counterpart mentioned in the README. Those are the types of open source projects I’m covering with this list. If I missed any significant case, please feel free to <a href="https://github.com/staltz/ttosa/pulls">open a pull request to update the table</a>.</p>

<p>Another important methodological choice is the “birth date” of a project. On the day a company is founded, its product barely exists, and the market probably has no idea about it. So it may seem silly to measure “time till open source” starting from that date, because the proprietary software has not yet acquired awareness in the market. It may also seem wrong to pick the open source alternative’s <em>initial commit</em> as the day we now have a viable alternative to the proprietary software. At that point, we certainly don’t!</p>

<p>However, we don’t have a good way of determining the day a product has become a viable solution in the market. When exactly did Sublime Text become popular? It’s hard to get an exact date. When <em>exactly</em> did Atom and VSCode rise as popular alternatives to Sublime Text? We don’t know.</p>

<p>So we need an exact date, and the birth date for a project is the best we can get. We then assume that projects like Sublime Text and Atom take roughly the same amount of time to grow from “day one” to “popular”. That’s why we use the difference between birth dates: it probably approximates the <em>difference between market popularity dates</em> well enough.</p>

<h2 id="a-couple-caveats">A couple caveats</h2>

<p>Let’s be a bit skeptical about this data for a moment; we can learn a few truths from the details. This list of open source projects has a mix of complex projects, simple projects, popular projects, and just-5k-GitHub-stars projects.</p>

<p>For instance, take BitKeeper (proprietary) versus Git (open source). Anyone who is a developer today knows what Git is, while BitKeeper is just a small anecdote in Git’s history. Contrast that with Apple Siri – known by everyone with an iPhone – versus SEPIA Framework, which has… 70 stars on GitHub.</p>

<p>It is clear that these open source projects are at various stages of maturity and industry leadership, and it’s a long shot to say that SEPIA Framework will disrupt Siri. Just because there exists an open source alternative to something, doesn’t mean that this alternative is yet of high quality. There is often a long journey for these projects before they are ready for mainstream. That’s a whole other aspect to measure.</p>

<p>That said, TTOSA is still a powerful measurement because it tells us it doesn’t take long until you have <em>some kind of barely usable</em> alternative to a proprietary product. If we measured “Time Till <em>High Quality</em> Open Source Alternative”, we would find that… duh… it takes a lot more time. But maybe we would <em>also</em> find a downward trend in that dataset. And that’s a powerful trend. High quality open source should send a chill down the spine of business dudes, and such projects already exist: Linux, VLC, Firefox, Git, OBS.</p>

<p>The projects in this list also vary in complexity. It’s much easier to build an open source alternative to a text editor like Workflowy or Roam than it is to build an open source alternative to YouTube.</p>

<p>Another story to be told is that a lot of these open source alternatives seem to be <em>built by companies</em>, not by open source hackers in their free time, and those companies want to make money. Examples: Excalidraw, GitLab, Bitwarden. The freemium open source business model is basically a way of preventing your company from being disrupted by third-party open source alternatives, because you control the open source to begin with, and you benefit from the community and its contributors. This means one thing, though: you admit you don’t make money from software; you make money from something else, be that cloud hosting, support, corporate-specific features, or something else.</p>

<p>Finally, a blind spot in this dataset is the green triangle in the chart below:</p>

<p><a href="/img/ttosa-chart-today-future.png"><img src="/img/ttosa-chart-today-future.png" alt="Time Till Open Source Alternative, chart showing today and the future" /></a></p>

<p>It means that maybe soon, in the year 2023, there will be a lot of open source alternatives discovered that have a TTOSA number as high as (say) 5000, thus landing in the green area above. This would mean our “downward trend” is incorrect, because we haven’t waited long enough to see the full picture.</p>

<p>That is a theoretical possibility, and the green triangle will <em>always</em> exist in this chart, no matter how far into the future we go. A counterpoint to this blind spot is that the world record for <em>lowest TTOSA</em> keeps being broken, decade after decade: in the 80s it was 2192 days, in the 90s 1725 days, in the 2000s 1094 days, and in the 2010s 244 days.</p>

<h2 id="the-endgame">The endgame?</h2>

<p>All software will be open source, and no one will make money with software.</p>

<p>That’s a pretty tough claim to accept, so let’s digest it in parts. Is all software becoming open source? You will always be able to write software and keep it secret, so that’s already a refutation: not all software will necessarily be open source. But that’s not my point.</p>

<p>My point is that all software <em>in the market</em> will be open source, and it’s caused by two trends. (1) Software is becoming easier to create, and its source code easier to share. The rise of high-level and/or interpreted languages made the creation of software easy, something you can do in a few weekends if you want to. The rise of GitHub means you can upload your project with three words and 10 seconds: <code class="language-plaintext highlighter-rouge">gh repo create</code>. And this is a feedback loop: libraries made open source end up increasing the productivity of more and more programmers, who in turn publish more open source code.</p>

<p>(2) Closed software dies when resources run out, but open source software only dies when public interest runs out. There are several examples I could mention of brilliant proprietary software that existed only briefly, simply because the startup that birthed it went bankrupt, or the tech giant discontinued the product to reallocate resources. I believe you can come up with your own recollections of these.</p>

<p>On the other hand, open source is born public, and receives care from fellow contributors <em>in proportion</em> to the amount of attention and interest the project gets. I’m not saying that open source projects never die; some definitely do. I’m saying that <em>popular</em> open source projects never die, because once they are popular enough, there are enough contributors to keep them alive. As a vivid example of this, my friend and ex-coworker Jani Eväkallio built <a href="https://github.com/foambubble/foam/">Foam</a> in 2 months and then unfortunately burned out. He never touched Foam again. However, by that time, the project had gathered enough popular interest, and regular contributors have kept it alive and relevant, updating it every month for the past 2 years.</p>

<p>Over time, this means that the open source ecosystem has a unique leverage over the startup ecosystem: startups have runways which prevent the <em>indefinite growth</em> of their product, unless they hit the right combination of luck and customer satisfaction. But popular open source is unconstrained, it only gets better over time. And we’ve come a long way. Blender used to be cringe for 3D modelling, nowadays Blender is punch-in-the-face awesome, and it will get even better.</p>

<p>Finally, to address the “no one will make money with software” claim. Open source software hardly makes any money, and will make <em>even less money</em> in the future. I explored this point in depth in a previous blog post titled <a href="https://staltz.com/software-below-the-poverty-line.html">Software below the poverty line</a>. The effect this has on the market is that it reduces the price point of software, no matter if it’s open or closed. If your closed software demands $1000 from my pocket but I can make do with a free and open source alternative, I will choose the open one.</p>

<p>At a macroscopic scale, this forces closed software companies to reduce their pricing to better match what you can get in the market. Many of these companies are now providing their software at price zero, and they monetize in other ways. B2B companies (like GitLab) just make their software open source and quit trying to compete in the software-for-sale market, monetizing on support, hosting, and other means instead. B2C companies like social media platforms monetize on attention, via ads. They <em>could</em> open source their software, but it makes little sense, because it’s so closely tied to their data center infrastructure. The point to be made here is that those platforms monetize their databases, <em>not</em> their software. In fact, their software is largely oriented towards taking good care of that valuable, humongous and sensitive database. Software itself really doesn’t have a future in making money.</p>

<p>In the future – and maybe it’ll take a couple more decades – all software will be open source, and no one will make money with software. And I think that’s a good thing.</p>]]></content><author><name>André Staltz</name></author><category term="blog" /><summary type="html"><![CDATA[Open source is coming for your business. It is just a matter of time before there exists a compelling open source alternative to your software. It won’t happen overnight, it will start out as a poor alternative, but slowly growing to become the robust and cheap (in fact, free!) solution that everyone uses. In this blog post, I’ll prove this to you with data. I present a measurement I call “Time Till Open Source Alternative” (TTOSA) which represents how long a proprietary software lasted without a direct open source alternative. The average TTOSA for the cases I measured is 7 years, and that seems like plenty of time for a business to be profitable with proprietary software, especially given that once the open source alternative hits the market, it still takes years until it can outright displace the proprietary champion. Doesn’t seem like there’s a problem here. However, TTOSA is speeding up, it’s becoming easier and easier to build open source alternatives, and lately we’ve been seeing a lot of them pop up on GitHub. The current world record for quickest TTOSA is 244 days, held by Foam, an alternative to Roam Research. This is a trend, and it means a lot of things for the future of software and the related businesses. The data I’ve been occasionally collecting data for this manually, and it’s large enough now to publish. The following table lists proprietary software and corresponding open source alternatives that were directly inspired by the proprietary software. For both, I tried to determine the birth date of the software, which was often either the date the company was founded, or the date of the initial commit. 
My source for these dates has been Crunchbase, Wikipedia, and git logs on GitHub. Time Till Open Source Alternative is then defined as the difference between those dates, assuming that the proprietary software always comes first. I identified 39 cases, see the table below, or get the raw CSV here. Plotting this data on a chart, where the X-axis is the the date the company was founded, and the Y-axis is the associated TTOSA, we get the following: Two things become clear with this chart: first, there is an explosion of dots after the mid 2000s, and this probably is correlated with the rise of the Web and subsequently the GitHub era (GitHub was founded in 2008). Second, the trend is downward. Measurement method The keen reader may notice that 39 open source alternatives doesn’t seem like a lot. Some popular open source projects may come immediately to mind and not be on that list. For instance, the Apache HTTP Server is not on the list. That’s because I’m measuring direct alternatives to proprietary software. Apache was not created as a direct replacement for some known proprietary software, even if in practice it could replace proprietary software at the time. Contrast that with GNU/Linux, for instance, where the name “GNU’s Not Unix” makes a direct reference to Unix. Open source alternatives like that will often have the name of their proprietary counterpart mentioned in the README. Those are the types of open source projects I’m covering with this list. If I missed any significant case, please feel free to open a pull request to update the table. Another important metric is the choice of “birth date” for a project. The day a company is founded, their product barely even exists, and probably the market has no idea about it. So it may seem silly that we’re measuring “time till open source” starting from that date, because the proprietary software has not yet acquired awareness in the market. 
It may also seem wrong to pick the open source alternative’s initial commit as the day we now have a viable alternative to the proprietary software. At that point, we certainly don’t! However, we don’t have a good way of determining the day a product has become a viable solution in the market. When exactly did Sublime Text become popular? It’s hard to get an exact date. When exactly did Atom and VSCode rise as popular alternatives to Sublime Text? We don’t know. So we need an exact date, and the birth date for a project is the best we can get. We then expect that projects like Sublime Text and Atom take the same amount of time to grow from “day one” until “popular”. That’s why we use the difference between birth dates, because it probably approximates difference between market popularity dates well enough. A couple caveats Let’s be a bit skeptic about this data for a moment, we can learn a few truths from the details. This list of open source projects has a mix of complex projects, simple projects, popular projects, and just-5k-GitHub-stars projects. For instance, take BitKeeper (proprietary) versus Git (open source). Anyone who is a developer today knows what Git is, while BitKeeper is just a small anecdote in Git’s history. Contrast that with Apple Siri – known by everyone with an iPhone – versus SEPIA Framework, which has… 70 stars on GitHub. It is clear that these open source projects are at various stages of maturity and industry leadership, and it’s a long shot to say that SEPIA Framework will disrupt Siri. Just because there exists an open source alternative to something, doesn’t mean that this alternative is yet of high quality. There is often a long journey for these projects before they are ready for mainstream. That’s a whole another aspect to measure. That said, TTOSA is still a powerful measurement because it tells us it doesn’t take long until you have some kind of barely usable alternative to a proprietary software. 
If we would measure “Time Till High Quality Open Source Alternative”, we would figure out that… duh… it takes a lot more time. But, maybe we would also find a downward trend in that dataset. And that’s a powerful trend. High quality open source should send a chill down the spine of business dudes, and they already exist: Linux, VLC, Firefox, Git, OBS. The projects in this list also vary in complexity. It’s much easier to build an open source alternative to a text editor like Workflowy or Roam, than it is to build an open source alternative to YouTube. Another story to be told is that a lot of these open source alternatives seem to be built by companies, not by open source hackers in their free time, and those companies want to make money. Examples: Excalidraw, GitLab, Bitwarden. The freemium open source business model is basically a way of preventing your company from being disrupted by third-party open source alternatives, because you control the open source to begin with, and you benefit by the community and contributors. This means one thing, though: you admit you don’t make money from software, you make money from something else, be that cloud hosting, support, corporate-specific features, or something else. Finally, a blind spot in this dataset is the green triangle in the chart below: It means that maybe soon in the year 2023 there will be a lot of open source alternatives discovered that have a TTOSA number as high as (say) 5000, thus landing the green area above. This would mean our “downward trend” is incorrect, because we haven’t waited long enough to see the full picture. That is a theoretical possibility, and the green triangle will always exist in this chart, no matter how far in the future we go. A counterpoint to this blind spot is that the world record for lowest TTOSA is being broken, decade after decade. In the 80s it was 2192 days, in the 90s it was 1725 days, in the 2000s it was 1094, in the 2010s it was 244. The endgame? 
All software will be open source, and no one will make money with software. That’s a pretty tough claim to accept, so let’s digest it in parts. Is all software becoming open source? You will always be able to write software and keep it secret, so that’s already a refutation. Not all software will necessary be open source. That’s not my point. My point is that all software in the market will be open source, and it’s caused by two trends. (1) Software is becoming easier to create and easier to share its source code. The rise of high-level and/or interpreted languages made the creation of software easy, and something you can do in a few weekends if you want to. The rise of GitHub means you can upload your project with three words and 10 seconds: gh repo create. And this is a feedback loop: libraries made open source end up being used to increase productivity of more and more programmers, who in turn publish more open source code. (2) Closed software dies when resources run out, but open source software only dies when public interest runs out. There are several examples I could mention of brilliant proprietary software that existed for a brief duration of time, simply because the startup that birthed it went bankrupt, or the tech giant discontinued the product because of resource allocation. I believe you can come up with your own recollections of these. On the other hand, open source is born public, and receives care from fellow contributors in proportion to the amount of attention and interest the project gets. I’m not saying that open source projects never die, some definitely do. I’m saying that popular open source projects never die, because once they are popular enough, there is a sufficient amount of contributors to keep it alive. As a vivid example of this, my friend and ex-coworker Jani Eväkallio built Foam in 2 months and then unfortunately burned out. He never touched Foam ever again. 
However, by that time, the project gathered enough popular interest, and there are regular contributors who keep the project alive and relevant, updating it every month, for the past 2 years. Over time, this means that the open source ecosystem has a unique leverage over the startup ecosystem: startups have runways which prevent the indefinite growth of their product, unless they hit the right combination of luck and customer satisfaction. But popular open source is unconstrained, it only gets better over time. And we’ve come a long way. Blender used to be cringe for 3D modelling, nowadays Blender is punch-in-the-face awesome, and it will get even better. Finally, to address the “no one will make money with software” claim. Open source software hardly makes any money, and will make even less money in the future. I explored this point in depth in a previous blog post titled Software below the poverty line. The effect this has on the market is that it reduces the price point of software, no matter if it’s open or closed. If your closed software demands $1000 from my pocket but I can make do with a free and open source alternative, I will choose the open one. At a macroscopic scale, this forces closed software companies to reduce their pricing to better match what you can get in the market. Many of these companies are now providing their software at price zero, and they monetize in other ways. B2B companies (like GitLab) just make their software open source and quit trying to compete in the software-for-sale market, monetizing on support, hosting, and other means instead. B2C companies like social media platforms monetize on attention, via ads. They could open source their software, but it makes little sense, because it’s so closely tied to their data center infrastructure. The point to be made here is that those platforms monetize their databases, not their software. 
In fact, their software is largely oriented towards taking good care of that valuable, humongous and sensitive database. Software itself really doesn’t have a future in making money. In the future – and maybe it’ll take a couple more decades – all software will be open source, and no one will make money with software. And I think that’s a good thing.]]></summary></entry><entry><title type="html">The Myth of Mass Collaboration</title><link href="https://staltz.com/the-myth-of-mass-collaboration.html" rel="alternate" type="text/html" title="The Myth of Mass Collaboration" /><published>2022-08-21T00:00:00+03:00</published><updated>2022-08-21T00:00:00+03:00</updated><id>https://staltz.com/the-myth-of-mass-collaboration</id><content type="html" xml:base="https://staltz.com/the-myth-of-mass-collaboration.html"><![CDATA[<p>There is a general belief that the internet has supercharged collective intelligence and allowed humans to collaborate at scale, producing knowledge and creating masterpieces. On the surface, it <em>seems</em> true. To name a few archetypal examples: Wikipedia, open source projects such as Linux, hacktivism, and crowdsourced science experiments such as Rosetta@home. However, those successes did <em>not</em> happen through coordination, planning, and execution at a global scale. There is little collaboration taking place.</p>

<p>I used to believe there was mass collaboration on the internet. But I’ve realized that collaboration is extremely hard. It does not scale, especially not at internet scale. The examples we look up to are either not collaborative on the microscopic level, or are rare exceptions to the rule.</p>

<h2 id="goals-and-teamwork">Goals and teamwork</h2>

<p>To collaborate on something is to work together with others towards a common goal. How often do people have a common goal? RARELY. Put three co-founders together to build a company, and each of them has slightly different goals for what they want to achieve with it. Start a band, and each musician will want to play slightly (or vastly) different genres. This is assuming a tiny group of people. Put a million people together and how many goals will there be? We have difficulty even with <em>communication</em> at that scale; how could we even begin to build consensus on <strong>one</strong> goal?</p>

<p>Apart from setting a common goal, collaboration requires good teamwork. I’m fascinated by teamwork. When it works, it’s beautiful, it’s productive, it’s art. I’m sure there are lots of teamwork coaches who specialize in helping teams at companies improve their dynamics, but I’m skeptical that they have figured out how to consistently create good teamwork anywhere. Teamwork either works, or it doesn’t. And even when it works, it could all fall apart when you change a tiny thing, like adding a member or changing the environment.</p>

<p>I have had great teamwork with fellow programmers, and bad teamwork. I’ve had great fellow musicians in a band, and difficult musical relationships. But the example I want to write about this time is from gaming.</p>

<p>I play Apex Legends weekly, and it’s a visceral dynamic of a team of 3 players coordinating at a fast pace, handling many variables at once, as a good e-sport should be. Unlike other battle royale shooter games, teamwork in Apex is vital. You can be good at shooting, but if you’re bad at collaborating, you’re not going to survive in Apex. You can be bad at shooting, but if you’re good at collaborating, you and your team usually end up okay.</p>

<p>Typically, these teams are formed… randomly. With strangers online. Sometimes you can audio chat with them too (it’s usually toxic complaints). If you’re a believer in mass collaboration, you would bet that soon enough you would stumble upon strangers who form a good team with you. Nope, you don’t. For two reasons:</p>

<p>(1) After playing this for 3 years, I’ve learned that you have to <em>learn about your teammates’ styles</em>, and this can only happen if you consistently play with the same folks. So playing with random people makes that impossible. (2) If you do stumble upon a stranger who is a great player, they probably… don’t think the same about you. The large <em>supply</em> of players on the internet means that it’s better to move on and try new teammates than to keep playing with someone with worse skills than you. This means that everyone who plays Apex with strangers is constantly on the lookout for great teammates, and those great players don’t want to commit to anyone who is worse than them. You’re left with 3 options: either you keep playing with mediocre players and rarely win; or you climb up to the top 100 players, become a pro, and develop friendships with the pros; or you play only with friends who want to commit to teamwork with you.</p>

<p>The latter has been my case. I’ve been playing exclusively with friends, and the dynamic has been similar to that of a team of programmers or musicians. And it’s still not easy: we have a hard time agreeing on our goals, and in Apex the goals and next steps change all the time. Every 5 seconds there is a different goal, and yes, we disagree on what those goals should be.</p>

<p>Mass collaboration does not happen in Apex, because average players don’t have incentive to learn teamwork with other average players, and because the pros are such a small group that we can’t even call that “mass” anymore.</p>

<h2 id="prolific-creators">Prolific creators</h2>

<p>Similar effects happen elsewhere on the internet. I have been working full-time on open source projects for more than 5 years now. Some people seem to believe that there is a “community of open source coders” collaborating on building software. That is far from the truth.</p>

<p>Open source is built by exceptional individuals, and <em>tweaked</em> by everyone else. A small group of prolific programmers do the hard job of building 80% of the code, and a crowd of other programmers take care of the 20%, comprising usually highly specific bug fixes, documentation improvements, issue reporting and outreach.</p>

<p>This 80-20 rule is also known as the Pareto principle and it permeates the internet. Another similar principle is the 1%-9%-90% rule, which says that in internet communities, 1% of users are active creators, 9% are occasional contributors, and 90% are lurkers. The exact numbers vary from case to case, but it holds true that lurkers are roughly an order of magnitude more numerous than occasional contributors, who in turn are an order of magnitude more numerous than active creators.</p>

<p><a href="/some-people-want-to-run-their-own-servers.html">I’ve written about this before</a> and given some examples from Wikipedia, YouTube, Mastodon, and Tor. It’s one of the defining aspects of the internet. A Wikipedia article is usually kickstarted by one person, not as a team collaboration. After publication, an army of occasional nitpickers (the contributors) enjoy finding and correcting small mistakes in the articles. The consumers of the article are a much larger group than the army of contributors.</p>

<p>As a general rule, the role of active creators or volunteers is to create content or to support the existence of the system. The role of contributors is corrective and supportive: either they suggest small fixes or they boost the active creators with retweets and upvotes. And the lurkers are basically invisible; you don’t hear much from them.</p>

<h2 id="interaction">Interaction</h2>

<p>The role of interaction or collaboration is not central to this dynamic. Interaction <em>between</em> prolific creators is somewhat common, but not an indispensable aspect of content creation and propagation on the internet. Take out partnerships between creators, and there are still a lot of creators producing content independently. But if you took out all of these prolific creators and left content creation in the hands of millions of occasional contributors and lurkers, then you would end up with an internet that is vastly smaller in quantity AND worse in quality. That is how much the internet leans on prolific creators.</p>

<p>Don’t get me wrong, though, interaction is extremely common; in fact it’s the majority of activity on the internet. But it’s usually not collaboration; instead it’s conversation, requests, debates, disagreements, and trolling.</p>

<p>Could it be said that the open source community is <em>collaborating</em> towards the common goal of providing quality software for everyone to use? Hardly. Programmers publishing open source projects have vastly differing objectives. Some want to show off their hobby projects. Some want attention and marketing. Others don’t have a clear goal; they just put projects on GitHub. “Quality software for all” ends up being an incidental result. Could it be said that the projects in the open source ecosystem depend on each other, build on each other’s successes, and thus are collaborating? Maybe. But they do so as consumers and producers of each other’s content, <em>not</em> as co-creators. It’s market dynamics of supply and demand, not teamwork dynamics.</p>

<p>Could it be said that reviewers on Trip Advisor and Amazon are collaborating towards curating the best hotels, restaurants, and products? Maybe. But they’re doing so by interacting with the <em>rules of the system</em>: they feed the <em>algorithm</em> with structured inputs, and it’s the algorithm that coordinates the curation of the best services and products. Unassisted by algorithms, there is little to no human-to-human coordination involved.</p>

<p>Could it be said that the army of nitpickers is truly <em>collaborating</em> towards the common goal of factually correcting content on the internet? Maybe, yeah. But there is very little interaction with others necessary for an individual to spot a mistake and correct it. The work of factually correcting articles can be parallelized at scale. That said, <em>nitpicking at a global scale</em> is not the utopian vision that we think of when we talk about “mass collaboration on the internet”.</p>

<h2 id="mass-visibility">Mass visibility</h2>

<p>What the internet has actually provided is scale and mass visibility to creators. When you bring everyone online, it’s a lot of people, it’s billions. And even though prolific people are rare, say 1 in every 1000 people, if there are 5 billion people online, that means 5 million prolific creators, which is a LOT. So much so that it sustains all 5 billion with a lot of interesting content, every day.</p>
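<p>This back-of-envelope estimate can be sketched as a quick calculation (the 1-in-1000 rate and the 5 billion figure are the illustrative numbers from the paragraph above, not measured data):</p>

```python
# Rough estimate from the text: if 1 in every 1000 people online is a
# prolific creator, how many creators does a 5-billion-person internet have?
people_online = 5_000_000_000
prolific_rate = 1 / 1000  # illustrative assumption

prolific_creators = int(people_online * prolific_rate)
print(prolific_creators)  # → 5000000, i.e. 5 million creators for 5 billion people
```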

<p>Before the internet, prolific people didn’t have global reach, and were limited to their local communities. While the internet has <em>allowed</em> more collaboration to take place, the internet has not <em>caused</em> collaboration. It takes teamwork, shared goals, and relationships that work. Finding like-minded people for collaboration is one thing the internet helped us with, but it takes So. Much. More. Than. That. We’re still pretty bad at agreeing on goals, learning how other people like to work, and adapting to that in a productive manner. And the internet is not going to change that. Not at small scale, and <strong>especially</strong> not at mass scale.</p>]]></content><author><name>André Staltz</name></author><category term="blog" /><summary type="html"><![CDATA[There is a general belief that the internet has supercharged collective intelligence and allowed humans to collaborate at scale, producing knowledge and creating masterpieces. On the surface, it seems true. To name a few archetypal examples: Wikipedia, open source projects such as Linux, hacktivism, and crowdsourced science experiments such as Rosetta@home. However, those successes did not happen as coordination, planning, and execution at a global scale. There is little collaboration taking place. I used to believe there was mass collaboration on the internet. But I’ve realized that collaboration is extremely hard. It does not scale, especially not at internet scale. The examples we look up to are either not collaborative on the microscopic level, or are rare exceptions to the rule. Goals and teamwork To collaborate on something is to work together with others towards a common goal. How often do people have a common goal? RARELY. Put three co-founders together to build a company, and each of them has slightly different goals for what they want to achieve with it. Start a band, and each musician will want to play slightly (or vastly) different genres. This is assuming a tiny group of people. 
Put a million people together and how many goals will there be? We have difficulty even with communication at that scale, how could we even begin to create a consensus on one goal? Apart from setting a common goal, collaboration requires good teamwork. I’m fascinated by teamwork. When it works, it’s beautiful, it’s productive, it’s art. I’m sure there are lots of teamwork coaches who specialize in consulting teams to improve their dynamic at companies, but I’m skeptic they have understood how to consistently create it anywhere. Teamwork either works, or it doesn’t. And even when it works, it could all fall apart when you change a tiny thing, like add a member or change the environment. I have had great teamwork with fellow programmers, and bad teamwork. I’ve had great fellow musicians in a band, and difficult musical relationships. But the example I want to write this time is in gaming. I play Apex Legends weekly, and it’s a visceral dynamic of a team of 3 players coordinating at a fast pace, handling many variables at once, as a good e-sport should be. Unlike other battle royale shooter games, teamwork in Apex is vital. You can be good at shooting, but if you’re bad at collaborating, you’re not going to survive in Apex. You can be bad at shooting, but if you’re good at collaborating, you and your team usually end up okay. Typically, these teams are formed… randomly. With strangers online. Sometimes you can audio chat with them too (it’s usually toxic complaints). If you’re a believer in mass collaboration, you would bet that soon enough you would stumble upon strangers that form a good team with you. Nope, you don’t. For two reasons: (1) After playing this for 3 years, I’ve learned that you have to learn about your team mates styles, and this can only happen if you consistently play with the same folks. So playing with random people makes that impossible. (2) If you do stumble upon a stranger who is a great player, they probably… don’t think the same about you. 
The large supply of players on the internet means that it’s better to move on and try new team players, than to keep playing with someone with worse skills than you. This means that everyone who plays Apex with strangers is just constantly on the outlook of great team mates, and those great players don’t want to commit to anyone who is worse than them. You’re left with 3 options: either you keep on playing with mediocre players, and rarely win; or you climb up to the top 100 players, become a pro and develop friendships with the pros; or you play only with friends who want to commit to teamwork with you. The latter has been my case. I’ve been playing exclusively with friends, and the dynamic has been similar to that of a team of programmers or musicians. And it’s still not easy, we have a hard time agreeing on our goals, and in Apex the goals and next steps change all the time. Every 5 seconds there is a different goal, and yes we disagree on what those goals should be. Mass collaboration does not happen in Apex, because average players don’t have incentive to learn teamwork with other average players, and because the pros are such a small group that we can’t even call that “mass” anymore. Prolific creators Similar effects happen elsewhere on the internet. It’s been more than 5 years now that I work full-time on open source projects. Some people seem to believe that there is a “community of open source coders” collaborating on building software. That is far from the truth. Open source is built by exceptional individuals, and tweaked by everyone else. A small group of prolific programmers do the hard job of building 80% of the code, and a crowd of other programmers take care of the 20%, comprising usually highly specific bug fixes, documentation improvements, issue reporting and outreach. This 80-20 rule is also known as the Pareto principle and it permeates the internet. 
Another similar principle is the 1%-9%-90% rule, which says that in internet communities, 1% of users are active creators, 9% are occasional contributors, and 90% are lurkers. The exact numbers vary from case to case, but it stands true that lurkers are approximately an order of magnitude more than occasional contributors, who in turn are an order of magnitude more than active creators. I’ve written about this before and gave some examples from Wikipedia, YouTube, Mastodon, and Tor. It’s one of the defining aspects of the internet. A Wikipedia article is usually kickstarted by one person, not as a team collaboration. After publication, an army of occasional nitpickers (the contributors) enjoy finding and correcting small mistakes in the articles. The consumers of the article are a much larger group than the army of contributors. As a general rule, the role of active creators or volunteers is to create content or to support the existence of the system. The role of contributors is corrective and supportive, either they are suggest small fixes or they are boosting the active creators with retweets and upvotes. And the lurkers are basically invisible, you don’t hear much from there. Interaction The role of interaction or collaboration is not central to this dynamic. Interaction between prolific creators is somewhat common, but not an indispensable aspect of content creation and propagation on the internet. Take out partnerships between creators, and there’s still a lot of creators producing content independently. But if you would take out all of these prolific creators and leave content creation on the hands of millions of occasional contributors and lurkers, then you end up with an internet that is vastly smaller in quantity AND worse in quality. That is how much the internet leans on prolific creators. Don’t get me wrong, though, interaction is extremely common, in fact it’s the majority of activity on the internet. 
But it’s usually not collaboration, instead it’s conversation, requests, debates, disagreements, and trolling. Could it be said that the open source community is collaborating towards the common goal of providing quality software for everyone to use? Hardly. Programmers publishing open source projects have vastly differing objectives. Some want to show off their hobby projects. Some want attention and marketing. Others don’t have a clear goal, they just put projects on GitHub. “Quality software for all” ends up being an incidental result. Could it be said that the ecosystem of open source tools depend on each other, build on the successes of each other, and thus are collaborating? Maybe. But they do so as consumers and producers of each other’s content, not as co-creators. It’s market dynamics of supply and demand, not teamwork dynamics. Could it be said that reviewers on Trip Advisor and Amazon are collaborating towards curating the best hotels, restaurants, and products? Maybe. But they’re doing so by interacting with the rules of the system: they feed the algorithm with structured inputs, and it’s the algorithm that coordinates the curation of the best services and products. Unassisted by algorithms, there is little to no human-to-human coordination involved. Could it be said that the army of nitpickers is truly collaborating towards the common goal of factually correcting content on the internet? Maybe, yeah. But there is very little interaction with others necessary for an individual to spot a mistake and correct it. The work of factually correcting articles can be parallelized at scale. That said, nitpicking at a global scale is not the utopic vision that we think of when we talk about “mass collaboration on the internet”. Mass visibility What the internet has actually provided is scale and mass visibility to creators. When you bring everyone online, it’s a lot of people, it’s billions. 
And even though prolific people are rare, say 1 in every 1000 persons, then if there are 5 billion people online, that means 5 million prolific creators, which is a LOT. So much that it sustains all the 5 billion with a lot of interesting content, every day. Before the internet, prolific people didn’t have global reach, and were limited to their local communities. While the internet has allowed more collaboration to take place, the internet has not caused collaboration. It takes teamwork, shared goals, and relationships that work. Finding like-minded people for collaboration is one thing the internet helped us, but it takes So. Much. More. Than. That. We’re still pretty bad at agreeing on goals, learning how other people like to work, and adapting to that in a productive manner. And the internet is not going to change that. Not at small scale, and especially not at mass scale.]]></summary></entry><entry><title type="html">Some people want to run their own servers</title><link href="https://staltz.com/some-people-want-to-run-their-own-servers.html" rel="alternate" type="text/html" title="Some people want to run their own servers" /><published>2022-01-08T00:00:00+02:00</published><updated>2022-01-08T00:00:00+02:00</updated><id>https://staltz.com/some-people-want-to-run-their-own-servers</id><content type="html" xml:base="https://staltz.com/some-people-want-to-run-their-own-servers.html"><![CDATA[<p>This post is a reply to Moxie’s recent article <a href="https://moxie.org/2022/01/07/web3-first-impressions.html">“My first impressions on web3”</a>. Although I am writing a criticism, I am grateful for Moxie’s post because it exposes several truths that underlie web3, which most people are not willing to see. There is a surprising amount of centralization actively brewing in the “decentralized” space and most of its advocates don’t seem concerned. I think money (especially loads of money) blinds people to the uncomfortable truths that Moxie made explicit.</p>

<p>At the same time, Moxie referred to <em>other</em> uncomfortable truths but somehow didn’t address them directly. This is why I felt compelled to write. I liked the honesty of his post but I want an even more honest discussion. Some claims he wrote are factually wrong and sound like mere ideology. Let’s begin with the first one:</p>

<blockquote>
  <p>People don’t want to run their own servers, and never will.</p>
</blockquote>

<p>This cannot be factually true, <em>some people want to run their own servers</em>. What is the <a href="https://github.com/awesome-selfhosted/awesome-selfhosted">thriving self-hosted open source community</a> doing, other than running their own servers? Are the people running <a href="https://instances.social">Mastodon instances</a>, <a href="https://github.com/YunoHost/yunohost">Yunohost apps</a> etc. not people?</p>

<p>To be fair, perhaps what Moxie meant is that a system cannot work if it requires <strong>all users</strong> to run their <strong>own</strong> servers. But that’s not what he meant, because he subsequently stated “Even nerds do not want to run their own servers”, and I know that many nerds do.</p>

<p>As a general truth when dealing with the internet audience, you can’t homogenize them, because there is always <em>someone</em> interested in the weirdest and most niche topic. “People don’t want X on the internet” is <em>never</em> a true statement. What you <em>can</em> talk about are <strong>percentages at scale</strong>.</p>

<p>I assume Moxie is familiar with the Pareto principle or the internet’s <a href="https://en.wikipedia.org/wiki/1%25_rule_(Internet_culture)">90-9-1 rule</a>, which is a simple rule of thumb to approximate the number of contributors in internet societies. It basically says that 90% of users in a system are passive consumers of content who hardly post or write at all, 9% are sporadic contributors, while 1% are power users and creators. There are some empirical studies that confirm it, give or take a few percentage points. I also did my own empirical sample and shared it in a <a href="https://twitter.com/andrestaltz/status/1291091805740699654">Twitter thread</a>.</p>

<p>To summarize:</p>

<ul>
  <li>On YouTube, 2 billion MAUs but only 15 million (0.75%) active creators</li>
  <li>On Wikipedia, 39 million registered users but only 128 thousand (0.3%) active contributors</li>
  <li>On Mastodon, 1 million active users but only 2 thousand (0.2%) instances</li>
  <li>On Tor, 2.5 million users but only 6 thousand (0.24%) relay servers</li>
</ul>
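<p>As a minimal sketch (using only the numbers quoted in the list above), the percentages can be recomputed directly:</p>

```python
# Recompute the active-contributor percentages from the list above.
# Each entry: (active contributors / hosts, total users).
platforms = {
    "YouTube": (15_000_000, 2_000_000_000),  # active creators vs MAUs
    "Wikipedia": (128_000, 39_000_000),      # active contributors vs registered users
    "Mastodon": (2_000, 1_000_000),          # instances vs active users
    "Tor": (6_000, 2_500_000),               # relay servers vs users
}

for name, (contributors, users) in platforms.items():
    pct = 100 * contributors / users
    print(f"{name}: {pct:.2f}% active contributors")
```

<p>Every ratio lands at 1% or below, which is the point of the list: the system-sustaining minority is tiny everywhere, not just in server hosting.</p>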

<p>The last two examples are the most pertinent to Moxie’s take. Hosting a server is a decision that very few users make, yet the whole system depends entirely on that minority of rare individuals. It’s tempting to look at the non-hosting users, see that they are a smashing majority (99.8%!), and draw the sweeping conclusion that <strong>no one</strong> wants to host their own servers. Yet it’s precisely because of those 0.2% that the system is capable of existing. Take them away, and the whole system dies.</p>

<p>Thus saying “<em>People don’t want to run their own servers</em>” is akin to saying “<em>People don’t want to start their own YouTube channel</em>”. Both sentences contain the same amount of statistical bullshit.</p>

<p>Now, there are certainly varying <em>degrees</em> of how much people <em>want</em> to become active contributors in a system, and it’s apparent from the list above: it’s easier for a YouTube user to become an active creator (0.75%) than it is for a Mastodon user to host their own instance (0.2%), almost 4x easier. I’d like to see the numbers for email hosting too, because I believe that hosting email is much harder than hosting a Mastodon instance.</p>

<p>And that’s precisely the type of conversation we <strong>can</strong> honestly have: how to make systems more compelling for participation and how to help more users become active contributors. (Spoilers: the answer is typically improving user experience and ease of use.) But in nearly all of these systems the number of active contributors remains at 1% or lower, yet we still can’t dismiss them as a <em>meaningless minority</em>. Quite the contrary: they are the minority that imbues the system with <strong>all</strong> of its meaning.</p>

<p>So it’s fair to say that some people actually want to host their own servers.</p>

<blockquote>
  <p>A protocol moves much more slowly than a platform</p>
</blockquote>

<p>This I can agree with. I work with/on a protocol, and it’s significantly slower. But it’s an ideological take — and patently a Silicon Valley take that smells like Zuckerberg’s “move fast and break things” — where change is synonymous with progress and where immutability is considered problematic, a failure. This is quite clear from Moxie’s post as he says “That is a problem for technology” and “if you don’t keep up you will fail”.</p>

<p>As you can see, I disagree, but I’ll back it up with examples. Immutability creates dependability, while moving fast creates flexibility. There are systems that should be dependable, and there are systems that should be flexible. To dismiss dependable systems as problematic is hypocritical, especially when those same Silicon Valley companies (and now non-profits like Signal Foundation too!) are building their systems <em>on top of dependable and immutable protocols</em> such as TCP/IP and even the bullied and battle-scarred email (when most login systems are email-driven). Signal depends on slow-moving telephone number standards, and for the social graph it relies on the slow-moving, non-centralized, local-first address book. This is not that different from the use of open source libraries to create proprietary software. The tech world critically depends on immutable protocols and open source software, yet centralized fast-moving proprietary systems accumulate all the power and glory.</p>

<p>Immutable systems aren’t necessarily only in the lower layers. End-user software should sometimes be immutable too. A concrete example: a close friend of mine was actively using the Calendar app on iOS until one day a forced update changed the app’s user interface significantly, so that my friend was forced to develop new habits and workflows. This bummed them out so much that they ended up quitting the Calendar app entirely. They couldn’t trust it anymore <strong>because</strong> it changed. Change is not synonymous with progress, and if you lose users due to it, you might as well admit that moving fast “is a problem for technology” and “if you change it you will fail”.</p>

<p>In this aspect, the crypto community needs to be heard. One of the reasons why people put things on the blockchain is because they want it to be a dependable database.</p>

<blockquote>
  <p>After a few days, without warning or explanation, the NFT I made was removed from OpenSea</p>
</blockquote>

<p>Moxie’s reaction to OpenSea’s abuse of power as a centralized platform is the central paradox of his blog post. He described the takedown as a negative event, yet at the same time he says:</p>

<blockquote>
  <p>This isn’t a complaint about OpenSea or an indictment of what they’ve built. Just the opposite, they’re trying to build something that works.</p>
</blockquote>

<p>So which one is it? Is “something that works” by definition also a force for censorship? Or is there actually a problem and a deep discomfort when your content is unilaterally taken down by the platform gods, no reason given and no right to dispute it?</p>

<p>I find this paradox fascinating, because it simultaneously tells us that the crypto community isn’t that serious about censorship resistance in cases like OpenSea, while it tells us that Moxie (who works for privacy and censorship resistance) admits that centralization is bad for censorship resistance.</p>

<p>There are lots of other things I could point out from his blog post, such as the opinion that smartphones cannot become servers (my project, <a href="https://manyver.se">Manyverse</a>, is precisely that, and people are increasingly using it), or that “distributing trust” is somehow incompatible with “distributing infrastructure” (why not both?), but I don’t want to bore the reader.</p>

<p>Finally, one of Moxie’s points that I firmly agree with, and that I think crypto proponents need to hear with self-honesty, is:</p>

<blockquote>
  <p>The people at the end of the line who are flipping NFTs do not fundamentally care about distributed trust models or payment mechanics, but they care about where the money is.</p>
</blockquote>

<p>The crypto community has to ask themselves whether they want decentralization or money. Sometimes they can have both, but at some point, they’ll be forced to make a critical choice between one or the other, and that’s how we can know what is the primary value upheld by the community. Similarly, to Moxie and the Signal team, you have to ask yourselves whether you want privacy / censorship resistance or centralization. You too will be forced by circumstances to make a choice.</p>]]></content><author><name>André Staltz</name></author><category term="blog" /><summary type="html"><![CDATA[This post is a reply to Moxie’s recent article “My first impressions on web3”. Although I am writing a criticism, I am grateful for Moxie’s post because it exposes several truths that underlie web3, which most people are not willing to see. There is a surprising amount of centralization actively brewing in the “decentralized” space and most of its advocates don’t seem concerned. I think money (especially loads of money) blinds people to the uncomfortable truths that Moxie made explicit. At the same time, Moxie referred to other uncomfortable truths but somehow didn’t address them directly. This is why I felt compelled to write. I liked the honesty of his post but I want an even more honest discussion. Some claims he wrote are factually wrong and sound like mere ideology. Let’s begin with the first one: People don’t want to run their own servers, and never will. This cannot be factually true, some people want to run their own servers. What is the thriving self-hosted open source community doing, other than running their own servers? Are the people running Mastodon instances, Yunohost apps etc not people? To be fair, perhaps what Moxie meant is that a system cannot work if it requires all users to run their own servers. But that’s not what he meant, because he subsequently stated “Even nerds do not want to run their own servers”, and I know that many nerds do. 
As a general truth when dealing with the internet audience, you can’t homogenize them, because there is always someone interested in the weirdest and most niche topic. “People don’t want X on the internet” is never a true statement. What you can talk about are percentages at scale. I assume Moxie is familiar with the Pareto principle or internet’s 90-9-1 rule, which is a simple rule of thumb to approximate the number of contributors in internet societies. It basically says that 90% of users in a system are passive consumers of content and don’t post or write almost at all, 9% are sporadic contributors, while 1% are power users and creators. There are some empirical studies that confirm it, give or take a few percentage points. I also did my own empirical sample and shared it in a Twitter thread. To summarize: On YouTube, 2 billion MAUs but only 15 million (0.75%) active creators On Wikipedia, 39 million registered users but only 128 thousand (0.3%) active contributors On Mastodon, 1 million active users but only 2 thousand (0.2%) instances On Tor, 2.5 millions users but only 6 thousand (0.24%) relay servers The last two examples are the most pertinent to Moxie’s take. Hosting a server is firmly a very rare decision that users take, yet the whole system depends entirely on that minority of rare individuals. It’s compelling to look at the non-hosting users, conclude that they are a smashing majority (99.8% !!!!), and make a sweeping conclusion that no one wants to host their own servers. Yet it’s precisely because of those 0.2% that the system is capable of existing. Take them away, and the whole system dies. Thus saying “People don’t want to run their own servers” is akin to saying “People don’t want to start their own YouTube channel”. Both sentences contain the same amount of statistical bullshit. 
Now, there are certainly varying degrees of how much people want to become active contributors in a system, and it’s apparent from the list above: it’s easier for a YouTube user to become an active creator (0.75%) than it is for a Mastodon user to host their own instance (0.2%), almost 4x easier. I’d like to see the numbers for email hosting too, because I believe that hosting emails is much harder than hosting a Mastodon instance. And that’s precisely the type of conversation we can honestly have: how to make systems more compelling for participation and how to help more users become active contributors. (Spoilers: the answer is typically improving user experience and ease of use) But in nearly all these systems the number of active contributors remains at 1% or lower, yet we still can’t dismiss them as a meaningless minority. Quite the contrary, they are the minority that imbue the system with all of its meaning. So it’s fair to say that some people actually want to host their own servers. A protocol moves much more slowly than a platform This I can agree with. I work with/on a protocol, and it’s significantly slower. But it’s an ideological take — and patently a Silicon Valley take that smells like Zuckerberg’s “move fast and break things” — where change is synonymous to progress and where immutability is considered problematic and failure. This is quite clear from Moxie’s post as he says “That is a problem for technology” and “if you don’t keep up you will fail”. As you can see, I disagree, but I’ll back it up with examples. Immutability creates dependability, while moving fast creates flexibility. There are systems that should be dependable, and there are systems that should be flexible. To dismiss dependable systems as problematic is hypocritic, especially when such Silicon Valley companies (and now non-profits like Signal Foundation too!) 
are building their systems on top of dependable and immutable protocols such as TCP/IP and even the bullied and battle-scarred email (when most login systems are email-driven). Signal depends on slow-moving telephone number standards, and for the social graph they rely on the slow-moving, not-centralized, local-first address book. This is not that different from the use of open source libraries to create proprietary software. The tech world critically depends on immutable protocols and open source software, yet centralized fast-moving proprietary systems accumulate all the power and glory. Immutable systems aren’t necessarily only in the lower layers. End-user software should sometimes be immutable too. A concrete example: a close friend of mine was actively using the Calendar app on iOS until one day a forced update changed the app’s user interface so significantly that my friend was forced to develop new habits and workflows. This bummed them out so much that they ended up quitting the Calendar app entirely. They couldn’t trust it anymore because it changed. Change is not synonymous with progress, and if you lose users due to it, you might as well admit that moving fast “is a problem for technology” and “if you change it you will fail”. In this aspect, the crypto community needs to be heard. One of the reasons why people put things on the blockchain is that they want it to be a dependable database. “After a few days, without warning or explanation, the NFT I made was removed from OpenSea.” Moxie’s reaction to OpenSea’s abuse of power as a centralized platform is the paradox in his blog post. He described the takedown as a negative event, yet at the same time he says: “This isn’t a complaint about OpenSea or an indictment of what they’ve built. Just the opposite, they’re trying to build something that works.” So which one is it? Is “something that works” by definition also a force for censorship? 
Or is there actually a problem and a deep discomfort when your content is unilaterally taken down by the platform gods, no reason given and no right to dispute it? I find this paradox fascinating, because it simultaneously tells us that the crypto community isn’t that serious about censorship resistance in cases like OpenSea, while it tells us that Moxie (who works for privacy and censorship resistance) admits that centralization is bad for censorship resistance. There are lots of other things I could point out from his blog post, such as the opinion that smartphones cannot become servers (my project, Manyverse, is precisely that, and people are increasingly using it), or that “distributing trust” is somehow incompatible with “distributing infrastructure” (why not both?), but I don’t want to bore the reader. Finally, one of Moxie’s points which I firmly agreed with, and which I think needs to be heard with self-honesty by crypto proponents, was: “The people at the end of the line who are flipping NFTs do not fundamentally care about distributed trust models or payment mechanics, but they care about where the money is.” The crypto community has to ask themselves whether they want decentralization or money. Sometimes they can have both, but at some point they’ll be forced to make a critical choice between one or the other, and that’s how we can know which value the community upholds as primary. Similarly, to Moxie and the Signal team: you have to ask yourselves whether you want privacy / censorship resistance or centralization. You too will be forced by circumstances to make a choice.]]></summary></entry><entry><title type="html">Rust for Mobile? Not yet</title><link href="https://staltz.com/rust-for-mobile-not-yet.html" rel="alternate" type="text/html" title="Rust for Mobile? 
Not yet" /><published>2021-10-28T00:00:00+03:00</published><updated>2021-10-28T00:00:00+03:00</updated><id>https://staltz.com/rust-for-mobile-not-yet</id><content type="html" xml:base="https://staltz.com/rust-for-mobile-not-yet.html"><![CDATA[<p>It’s been 1 year and 1 month since <a href="https://viewer.scuttlebot.io/%25ce80ayDLE4rDCdVHER8KzAEC3QoQZApUoGho9uAY69o%3D.sha256">I announced ssb-neon on SSB</a> as an effort to gradually migrate the <a href="https://github.com/ssbc">SSB</a> tech stack from JS to Rust. I learned a lot about the technical details of actually doing this in production (in <a href="https://manyver.se">Manyverse</a>) and have some lessons to share.</p>

<h2 id="summary">Summary</h2>

<p>Since Manyverse version 0.2110.5, all Rust libraries have been removed. This was a sad decision I had to make, for various technical reasons that I’ll explain below.</p>

<p><a href="https://github.com/ssb-rsjs/ssb-rsjs">ssb-neon, renamed to ssb-rsjs</a> (to decouple ourselves from the <a href="https://neon-bindings.com/">Neon</a> library specifically), was also supposed to be a community effort. I thought people would spontaneously contribute a Rust variant of simple libraries like <a href="https://github.com/ssbc/ssb-ref">ssb-ref</a>, <a href="https://github.com/ssbc/ssb-serve-blobs">ssb-serve-blobs</a>, etc, because I trusted in the spontaneous and modular contribution model that powered the <a href="https://pull-stream.github.io/">pull-stream community</a> and the <a href="https://github.com/callbag/callbag/wiki">callbag community</a>. To my surprise, apart from the two initial ones I built, no one else made an ssb-rsjs library. <a href="https://viewer.scuttlebot.io/@MRiJ+CvDnD9ZjqunY1oy6tsk0IdbMDC4Q3tTC8riS3s=.ed25519">@Daan</a> (if I remember correctly) tried to start one, and <a href="https://github.com/ssb-ngi-pointer/ssb-validate2-rsjs">@glyph built one for ssb-validate2</a> under the SSB NGI Pointer project, but there wasn’t any new module from the original ssb-rsjs list.</p>

<h2 id="the-good">The good</h2>

<p>There were some concerns expressed that the frequent back-and-forth between JS (V8) and Rust would be a problem for performance, but that turned out not to be a measurable problem at all. In most cases, there was actually a measurable speed-up (10% – 25%).</p>

<p>Programming in Rust has been, in my experience, relatively straightforward, and translating JS concepts to Rust was not too hard to do. It seemed like a matter of “just doing it”. And as far as I can see from glyph’s work, <a href="https://github.com/infinyon/node-bindgen">node-bindgen</a> was even more dev-friendly than Neon. It felt like we just needed to do that for all components in SSB and we’d be done.</p>

<h2 id="the-bad">The bad</h2>

<ul>
  <li>Large compilation times</li>
  <li>Large binary sizes</li>
  <li>Non-shared binary dependencies</li>
</ul>

<p>From the beginning it was obvious that the Rust compiler spends a lot of time spinning your fans, and it may take ~3 min to get one simple library (such as <a href="https://github.com/staltz/ssb-keys-neon">ssb-keys-neon</a>) to compile. Multiply that duration by the number of supported architectures (at least armv7 and armv8) and by the number of ssb-rsjs libraries, and suddenly it becomes a big deal to wait for Manyverse to fully compile. This sometimes affected development speed, because some coding required re-compiling. Most coding didn’t require re-compiling, but when it did, it felt really slow. I understand that the Rust compiler can cache most built dependencies in the compilation, but when you’re dealing with esoteric dev environments such as <a href="https://github.com/nodejs-mobile/">nodejs-mobile</a>, Android Gradle, and Xcode, I really have no idea how to enable caching.</p>
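<p>To make that concrete, the worst-case arithmetic looks like this (a sketch; the ~3 min figure and two architectures are from the paragraph above, and the 22-module count comes from the original ssb-neon list):</p>

```javascript
// Back-of-the-envelope: clean-build time if every planned module were Rust
const minutesPerLibrary = 3; // ~3 min for one simple library, as observed
const architectures = 2;     // at least armv7 and armv8
const libraries = 22;        // modules on the original ssb-neon list
const totalMinutes = minutesPerLibrary * architectures * libraries;
// 132 minutes for a full clean compile, i.e. over two hours
```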

<p>The binary sizes turned out to be quite concerning as well. Here are some example sizes of binary dependencies shipped in Manyverse 0.2108.2 (in bold are the ssb-rsjs dependencies):</p>

<ul>
  <li>bufferutil: 10 kB</li>
  <li>sodium-native: 568 kB</li>
  <li>leveldown: 450 kB</li>
  <li><strong>ssb-keys-neon: 3.11 MB</strong></li>
  <li><strong>ssb-keys-mnemonic-neon: 2.96 MB</strong></li>
</ul>

<p>All things considered, shipping an extra 6 MB is not a big deal. The problem is that we wanted to convert many more modules from JS to Rust: according to the original list on the ssb-neon repo, there would be 22 of them. At roughly 3 MB each, the total would be about 66 MB. That would probably push the APK size for Manyverse above 100 MB, which for some users starts to become a no-no.</p>
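<p>The size estimate above is just multiplication, but it is worth spelling out (a sketch using the numbers from this section):</p>

```javascript
// Estimated native-binary payload if all planned modules were converted
const modules = 22;    // from the original ssb-neon list
const mbPerModule = 3; // each ssb-rsjs binary weighs roughly 3 MB
const extraMb = modules * mbPerModule;
// 66 extra MB, pushing the Manyverse APK past the 100 MB mark
```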

<p>The underlying problem there is that these binaries have a bunch of dependencies, but they don’t share them. For instance, it’s common for Rust crates to have dependencies such as <code class="language-plaintext highlighter-rouge">base64</code>, <code class="language-plaintext highlighter-rouge">byteorder</code>, <code class="language-plaintext highlighter-rouge">cfg-if</code>, <code class="language-plaintext highlighter-rouge">libc</code>, <code class="language-plaintext highlighter-rouge">memchr</code>, <code class="language-plaintext highlighter-rouge">rand</code>, <code class="language-plaintext highlighter-rouge">serde</code>, <code class="language-plaintext highlighter-rouge">thread_local</code>, etc, which means that <strong>each ssb-rsjs binary</strong> would ship its own copy of these dependencies. Ideally they would be deduplicated. Maybe this is possible, maybe the dependencies can be compiled as dynamic libraries, but I have no idea how to configure that and tie it all together. (Reminder: I do all this through nodejs-mobile, Android Gradle, and Xcode.) If you’re reading this and you know the solution, please help.</p>

<p>Even if dependencies were shared, one would have to account for different versions of those dependencies, because library A may need dependency X at 1.1.0 while library B needs X at 2.3.0. I am not sure what the total binary dependency tree would add up to in storage costs, but let’s say that anything above 30 MB total would be bad.</p>
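<p>A toy model of the deduplication problem: if each (crate, version) pair could be shipped once as a shared library, the total would shrink, but conflicting versions would still have to be shipped separately. The crate names and kB sizes below are illustrative placeholders, not real measurements:</p>

```javascript
// Illustrative only: duplicated vs deduplicated dependency sizes (kB).
// Keying by name@version models library A needing X@1.1.0 while B needs X@2.3.0.
const binaries = [
  { name: 'lib-a', deps: { 'serde@1.0': 400, 'libc@0.2': 150, 'x@1.1.0': 80 } },
  { name: 'lib-b', deps: { 'serde@1.0': 400, 'libc@0.2': 150, 'x@2.3.0': 90 } },
];

// Static linking: every binary carries its own copy of every dependency
const duplicated = binaries
  .flatMap(b => Object.values(b.deps))
  .reduce((a, n) => a + n, 0);

// Hypothetical dynamic linking: each (crate, version) pair shipped once
const shared = new Map(binaries.flatMap(b => Object.entries(b.deps)));
const deduplicated = [...shared.values()].reduce((a, n) => a + n, 0);
// duplicated: 1270 kB vs deduplicated: 720 kB in this toy example
```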

<p>According to the <a href="https://github.com/ssb-rsjs/ssb-rsjs/blob/master/PLAN.md">ssb-rsjs plan</a>, which is split into four “horizons”, this means that executing Horizon 2 is prohibitive and we would need to skip directly from Horizon 1 to Horizon 3, i.e. a full rewrite, with a lot of unpolished corner cases that would probably show up to end users as bugs and crashes. In essence, it’s hard to execute a <em>gradual migration</em> from Node.js to Rust.</p>

<h2 id="the-ugly">The ugly</h2>

<p>The above were not the real deal breakers though. The worst seemed to be that Rust hasn’t matured as a choice for <em>mobile development</em>. I’m sure it’s a great choice for embedded, for servers and desktops, but the mobile support is quite experimental.</p>

<ul>
  <li>Unreliable iOS support</li>
  <li>Deal-breaker changes from Apple</li>
  <li>No support for Android 5.0</li>
</ul>

<p>For iOS, the Rust ecosystem went back and forth on whether to support dynamic linking on iOS (see <a href="https://github.com/rust-lang/rust/pull/73516">73516</a> and <a href="https://github.com/rust-lang/rust/pull/77716">77716</a>), and it’s still not resolved (see <a href="https://github.com/rust-lang/cargo/issues/4881">cargo 4881</a>).</p>

<p>Worse was when <a href="https://developer.apple.com/forums/thread/655588?answerId=665804022#665804022">Apple introduced breaking changes to library linkage on macOS Big Sur</a>, essentially replacing <code class="language-plaintext highlighter-rouge">dylib</code> files in the SDK with <code class="language-plaintext highlighter-rouge">tbd</code> stub libraries, making it impossible (we have not found a solution) to build ssb-rsjs libraries on Big Sur. Apple, as usual, puts the burden on third-party tool developers (i.e. Rust and Cargo devs) to “adapt to this new reality”. In practice this meant that I had to avoid at all costs updating my macOS to Big Sur, otherwise <a href="https://gitlab.com/staltz/manyverse/-/issues/1371">I wouldn’t be capable of compiling ssb-rsjs libraries for Manyverse iOS</a>.</p>

<p>Another deal breaker was on the Android side. All users with Android 5.0 and 5.1 (e.g. many in Myanmar) experienced <a href="https://gitlab.com/staltz/manyverse/-/issues/1400">crashes</a> when trying to open Manyverse containing ssb-rsjs libraries. The crash is related to the Rust compiler, the Android NDK, and how Node.js Mobile ties all this together, and I wish I had the (C++, NDK, Rust, linkers) competence to fix it, but I don’t. And this is not your average StackOverflow-answerable issue; it requires knowledge of a lot of different technologies working in concert. Android was never meant to support Node.js Mobile, and Google only announced official support for Rust in the NDK this year, so it’s early days for Rust on Android. Put these three together and you get headaches.</p>

<h2 id="conclusion">Conclusion</h2>

<p>Removing Rust libraries from Manyverse dropped the total size of the app, increased its support across OS versions, improved compilation times, and I haven’t heard of any user complaining that it got slower.</p>

<p>It’s a shame, really; I would love to have a highly efficient backend for the app, and I think performance is a big deal. But I think the road to get there is not gradual, and it’s not Rust. Maybe Rust will someday have first-class support on iOS and Android, but 2021 is not yet the year to place all your bets on that.</p>

<p>If I were to start from scratch, and assuming unlimited budget I would probably build the mobile tech stack in ObjectiveC for iOS and Kotlin (or Java) for Android, because those are guaranteed to have first-class support by Apple and Google, and they have great performance too (I have a hard time believing that Rust on mobile would be, all things considered from an end-user perspective, faster than the 1st-class mobile languages, given all the optimizations and tight integration for the 1st-class languages).</p>]]></content><author><name>André Staltz</name></author><category term="blog" /><summary type="html"><![CDATA[It’s been 1 year and 1 month since I announced ssb-neon on SSB as an effort to gradually migrate the SSB tech stack from JS to Rust. I learned a lot about the technical details of actually doing this in production (in Manyverse) and have some lessons to share. Summary Since Manyverse version 0.2110.5, all Rust libraries have been removed. This was a sad decision that I had to take, for various technical reasons that I’ll explain below. ssb-neon, renamed to ssb-rsjs (to decouple ourselves from the Neon library in specific), was also supposed to be a community effort. I thought people would spontaneously contribute a Rust variant of simple libraries like ssb-ref, ssb-serve-blobs, etc, because I trusted in the spontaneous and modular contribution model that powered the pull-stream community and the callbag community. To my surprise, apart from the two initial ones I built, no one else made an ssb-rsjs library. @Daan (if I remember correctly) tried to start one, and @glyph built one for ssb-validate2 under the SSB NGI Pointer project, but there wasn’t any new module from the original ssb-rsjs list. The good There were some concerns expressed that the frequent back-and-forth between JS (V8) and Rust would be a problem for performance, but that didn’t turn out to be a measurable problem, at all. In most cases, there was a measurable speed up (10% – 25%). 
Programming in Rust has been relatively (in my experience) straightforward and translation from JS concepts to Rust not too hard to do. It seemed like a matter of “just doing it”. And as far as I see with glyph’s work, node-bindgen was even more dev friendly than Neon. It felt like we just needed to do that for all components in SSB and we’d be done. The bad Large compilation times Large binary sizes Non-shared binary dependencies From the beginning, it’s obvious that the Rust compiler spends a lot of time spinning your fans, and it may take ~3 min to get one simple library (such as ssb-keys-neon) to compile. Multiply that duration with the number of different architectures supported (at least armv7 and armv8) and number of ssb-rsjs libraries, and suddenly it becomes a big deal to wait for Manyverse to fully compile. This sometimes affected development speed because some coding required re-compiling. Most coding didn’t require re-compiling, but when it did, it felt really slow. I understand that the Rust compiler can cache most built dependencies in the compilation, but when you’re dealing with esoteric dev environments such as nodejs-mobile, Android Gradle, and XCode, I really have no idea how to enable caching. The binary sizes turned out to be quite concerning as well. Here are some example sizes of binary dependencies shipped in Manyverse 0.2108.2 (in bold are the ssb-rsjs dependencies): bufferutil: 10 kB sodium-native: 568 kB leveldown: 450 kB ssb-keys-neon: 3,11 MB ssb-keys-mnemonic-neon: 2,96 MB All things considered, shipping extra 6 MB is not a big deal. The problem is if you consider all the modules we wanted to convert from JS to Rust, it becomes many modules. For example, according to the original list on the ssb-neon repo, there would be 22 modules. If you count that each one would be 3 MB, then the total would be 66 MB. It would probably mean that the APK size for Manyverse would be greater than 100 MB, which for some users begins to be a no no. 
The underlying problem there is that these binaries have a bunch of dependencies, but they don’t share the dependencies. For instance, it’s common for Rust crates to have dependencies such as base64, byteorder, cfg-if, libc, memchr, rand, serde, thread_local, etc, which means that each ssb-rsjs binary would ship their own copy of these dependencies. Ideally they would be deduplicated. Maybe this is possible, maybe the dependencies can be compiled as dynamic libraries, but I have no idea how to configure that, and tie all of that together. (Reminder: I do all this through nodejs-mobile, Android Gradle, and XCode) If you’re reading this and you know the solution, please help. Even if dependencies would be shared, one would have to take into account different versions of those dependencies, because library A may need dependency X at 1.1.0 while library B needs X at 2.3.0. I am not sure what would the total binary dependency tree add up in storage costs, but let’s say that above 30 MB total would be bad. According to the ssb-rsjs plan split into four “horizons”, this means that executing Horizon 2 is prohibitive and we would need to skip directly from Horizon 1 to Horizon 3, which means a full rewrite that comes with a lot of to-be-polished corner cases and probably would show up to end-users as bugs and crashes. In essence, it’s hard to execute a gradual migration from Node.js to Rust. The ugly The above were not the real deal breakers though. The worst seemed to be that Rust hasn’t matured as a choice for mobile development. I’m sure it’s a great choice for embedded, for servers and desktops, but the mobile support is quite experimental. Unreliable iOS support Deal-breaker changes from Apple No support for Android 5.0 For iOS, the Rust ecosystem went back and forth whether to support dynamic linking on iOS (see 73516 and 77716), and it’s still not resolved (see cargo 4881). 
Worse was when Apple introduced breaking changes to library linkage on macOS Big Sur, essentially replacing dylib files in the SDK with tbd stub libraries, making it impossible (we have not found a solution) to build ssb-rsjs libraries on Big Sur. Apple, as usual, puts the burden on third-party tool developers (i.e. Rust and Cargo devs) to “adapt to this new reality”. In practice this meant that I had to avoid at all costs updating my macOS to Big Sur, otherwise I wouldn’t be capable of compiling ssb-rsjs libraries for Manyverse iOS. Another deal breaker was on the Android side. All users with Android 5.0 and 5.1 (e.g. many in Myanmar) experienced crashes when trying to open Manyverse containing ssb-rsjs libraries. The crash is related to the Rust compiler, the Android NDK, and how Node.js Mobile ties all this together, and I wish I had the (C++, NDK, Rust, linkers) competence to fix it, but I don’t. And this is not your average StackOverflow-answerable issue, it requires knowledge of a lot of different technologies working in concert. Android was never meant to support Node.js Mobile and Google only announced official support for Rust in the NDK this year, so it’s early stages for Rust on Android. Put all these three together and you get headaches. Conclusion Removing Rust libraries from Manyverse dropped the total size of the app, increased its support across OS versions, improved compilation times, and I haven’t heard of any user complaining that it got slower. It’s a shame, really, I would love to have a highly-efficient backend for the app, and I think performance is a big deal. But I think the road to get there is not gradual, and it’s not Rust. Maybe Rust will have first-class support on iOS and Android, but 2021 is not the year to place all your bets on that, yet. 
If I were to start from scratch, and assuming unlimited budget I would probably build the mobile tech stack in ObjectiveC for iOS and Kotlin (or Java) for Android, because those are guaranteed to have first-class support by Apple and Google, and they have great performance too (I have a hard time believing that Rust on mobile would be, all things considered from an end-user perspective, faster than the 1st-class mobile languages, given all the optimizations and tight integration for the 1st-class languages).]]></summary></entry><entry><title type="html">Software below the poverty line</title><link href="https://staltz.com/software-below-the-poverty-line.html" rel="alternate" type="text/html" title="Software below the poverty line" /><published>2019-06-13T00:00:00+03:00</published><updated>2019-06-13T00:00:00+03:00</updated><id>https://staltz.com/software-below-the-poverty-line</id><content type="html" xml:base="https://staltz.com/software-below-the-poverty-line.html"><![CDATA[<p>Most people believe that <a href="https://www.fordfoundation.org/about/library/reports-and-studies/roads-and-bridges-the-unseen-labor-behind-our-digital-infrastructure/">open source sustainability is a difficult problem</a> to solve. As an open source developer myself, my own perspective to this problem was more optimistic: I believe in the donation model, for its simplicity and possibility to scale.</p>

<p>However, I recently met other open source developers that make a living from donations, and they helped widen my perspective. At Amsterdam.js, I heard <a href="https://github.com/hzoo/open-source-charity-or-business/">Henry Zhu speak about sustainability</a> in the Babel project and beyond, and it was a pretty dire picture. Later, over breakfast, Henry and I had a deeper conversation on this topic. In Amsterdam I also met up with <a href="https://github.com/wooorm">Titus</a>, who maintains the <a href="https://unified.js.org/">Unified</a> project full-time. Meeting with these people confirmed my belief in the donation model for sustainability. It works. But what really stood out to me was the question: is it fair?</p>

<p>I decided to collect data from OpenCollective and GitHub, and take a more scientific sample of the situation. The results I found were shocking: there were two clearly sustainable open source projects, but the majority (more than 80%) of projects that we usually consider sustainable are actually receiving income below industry standards or even below the poverty threshold.</p>

<h2 id="what-the-data-says">What the data says</h2>

<p>I picked <a href="https://opencollective.com/discover">popular open source projects</a> from OpenCollective, and selected the yearly income data from each. Then I looked up their GitHub repositories, to measure the count of stars, and how many “full-time” contributors they have had in the past 12 months. Sometimes I also looked up the Patreon pages for those few maintainers that had one, and added that data to the yearly income for the project. For instance, it is obvious that Evan You gets money on <a href="https://www.patreon.com/evanyou">Patreon to work on Vue.js</a>. These data points allowed me to measure: project <strong>popularity</strong> (a proportional indicator of the number of users), <strong>yearly revenue</strong> for the whole team, and <strong>team size</strong>.</p>

<p>It is difficult to derive exactly how many users there are for each project, especially because they may be transitive users, not aware that they are using the project. This is why I went with GitHub stars as a good-enough measurement for user count, because it counts <em>persons</em> (unlike download counts, which can include CI computers) who are <em>conscious</em> of the project’s worth.</p>

<p>I scanned 58 projects in total, which may seem like a small number, but this was done from the most popular to the least. Popularity is very important to scale the donations, and it turns out that very few projects have enough popularity to achieve fair compensation. In other words, among these 58 most popular projects, the majority are below sustainability thresholds. I believe that if I were to cover more data points, they would likely be less popular than these. This data set might be biased towards JavaScript projects on OpenCollective, but I chose to sample OpenCollective because it provides easy, transparent data on the finances of various projects. I want to remind the reader of the existence of other popular open source projects such as Linux, nginx, VideoLAN, and others. It would be good to incorporate the financial data from those projects in this data set.</p>

<p>From GitHub data and OpenCollective, I was able to calculate how much of a project’s yearly revenue goes to each “full-time equivalent” contributor. This is essentially their salary. Put better, this is how much their salary via donations would be if they were working exclusively on the open source project, without any complementary income. It is likely that a sizable number of creators and maintainers work only part-time on their projects. Those that work full-time sometimes complement their income with savings or by living in a country with lower costs of living, <a href="https://twitter.com/sindresorhus/status/902954660285128704">or both (Sindre Sorhus)</a>.</p>

<p>Then, based on the <a href="https://insights.stackoverflow.com/survey/2019#work-_-salary-by-developer-type">latest StackOverflow developer survey</a>, we know that the low end of developer salaries is around $40k, while the high end of developer salaries is above $100k. That range depicts the industry standard for developers, given their status as knowledge workers, many of which are living in OECD countries. This allowed me to classify the results into four categories:</p>

<ul>
  <li>BLUE: 6-figure salary</li>
  <li>GREEN: 5-figure salary within industry standards</li>
  <li>ORANGE: 5-figure salary below our industry standards</li>
  <li>RED: salary below the <a href="https://poverty.ucdavis.edu/faq/what-are-poverty-thresholds-today">official US poverty threshold</a></li>
</ul>
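<p>In code form, the classification is a set of thresholds on per-contributor yearly revenue. A minimal sketch, assuming $100k for the six-figure line, $40k for the low end of industry salaries, and roughly $13k for the US single-person poverty threshold (the exact threshold varies by household size and year):</p>

```javascript
// Classify per-contributor yearly revenue into the four color bands.
// Threshold values are assumptions drawn from the figures quoted in this post.
const SIX_FIGURES = 100_000;
const INDUSTRY_LOW = 40_000;
const POVERTY = 13_000; // approximate US single-person poverty threshold

function classify(yearlyRevenue, fullTimeContributors) {
  const salary = yearlyRevenue / fullTimeContributors;
  if (salary >= SIX_FIGURES) return 'BLUE';
  if (salary >= INDUSTRY_LOW) return 'GREEN';
  if (salary >= POVERTY) return 'ORANGE';
  return 'RED';
}

classify(250_000, 2); // 'BLUE'   (two maintainers at $125k each)
classify(16_000, 1);  // 'ORANGE' (one maintainer below industry standards)
classify(20_000, 3);  // 'RED'    (three maintainers below the poverty line)
```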

<p>The first chart, below, shows team size and “price” for each GitHub star.</p>

<p><a href="/img/poverty-teamsize.png"><img src="/img/poverty-teamsize.png" alt="Open source projects, income-per-star versus team size" /></a></p>

<p><strong>More than 50% of projects are red</strong>: they cannot sustain their maintainers above the poverty line. 31% of the projects are orange, consisting of developers willing to work for a salary that would be considered unacceptable in our industry. 12% are green, and only 3% are blue: Webpack and Vue.js. Income per GitHub star is important: sustainable projects generally have above $2/star. The median value, however, is $1.22/star. Team size is also important for sustainability: the smaller the team, the more likely it can sustain its maintainers.</p>
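<p>The income-per-star figure is simply yearly revenue divided by star count, and the $2/star observation can be turned into a rough screening rule (a sketch; the project numbers below are hypothetical, not taken from the data set):</p>

```javascript
// Dollars of yearly revenue per GitHub star, and the $2/star rule of thumb
const perStar = (yearlyRevenue, stars) => yearlyRevenue / stars;
const looksSustainable = (yearlyRevenue, stars) =>
  perStar(yearlyRevenue, stars) > 2;

perStar(30_000, 20_000);          // 1.5 $/star, close to the $1.22 median
looksSustainable(30_000, 20_000); // false: below the $2/star threshold
looksSustainable(50_000, 20_000); // true: 2.5 $/star
```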

<p>The median donation per year is $217, which is substantial when understood on an individual level, but in reality it includes sponsorship from companies that are also doing this for their own marketing purposes.</p>

<p>The next chart shows how revenue scales with popularity.</p>

<p><a href="/img/poverty-popularity.png"><img src="/img/poverty-popularity.png" alt="Open source projects, yearly revenue versus GitHub stars" /></a></p>

<p>You can browse the data yourself by accessing this <a href="https://datproject.org/">Dat archive</a> with a LibreOffice Calc spreadsheet:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>dat://bf7b912fff1e64a52b803444d871433c5946c990ae51f2044056bf6f9655ecbf
</code></pre></div></div>

<h2 id="popularity-versus-fairness">Popularity versus fairness</h2>

<p>While popularity is key to green and blue sustainability, there are popular projects in red, such as <a href="https://github.com/prettier/prettier">Prettier</a>, <a href="https://github.com/curl/curl">Curl</a>, <a href="https://github.com/jekyll/jekyll">Jekyll</a>, <del><a href="https://github.com/electron/electron">Electron</a></del> (update:) <a href="https://github.com/avajs/">AVA</a>. This doesn’t mean the people working on those projects are poor, because in several cases the maintainers have jobs at companies that allow open source contributions. What it does mean, however, is that unless companies take an active role in supporting open source with significant funding, what’s left is a situation where most open source maintainers are severely underfunded. On donations alone, open source is sustainable (fairly, within industry standards) only in a sweet spot: when a popular project, with a sufficiently small team, knows how to gather significant funding from a crowd of donors or sponsor organizations. Fair sustainability is sensitive to these parameters.</p>

<p>Because visibility is fundamental for donation-driven sustainability, the “invisible infrastructure” projects are often in a much worse situation than the visible ones. For instance, <a href="https://github.com/zloirock/core-js">Core-js</a> is less popular than <a href="https://github.com/babel/babel">Babel</a>, although <a href="https://babeljs.io/docs/en/next/babel-polyfill.html">it is a dependency in Babel</a>.</p>

<table style="margin: 15px 0;">
  <thead>
    <tr>
      <th>Library</th>
      <th>Used by</th>
      <th>Stars</th>
      <th>'Salary'</th>
    </tr>
  </thead>
  <tbody style="text-align:right">
    <tr>
      <td style="padding:5px;border:1px solid #cdcdcd;">Babel</td>
      <td style="padding:5px;border:1px solid #cdcdcd;">350284</td>
      <td style="padding:5px;border:1px solid #cdcdcd;">33412٭</td>
      <td style="padding:5px;border:1px solid #cdcdcd;">$70016</td>
    </tr>
    <tr>
      <td style="padding:5px;border:1px solid #cdcdcd;">Core-js</td>
      <td style="padding:5px;border:1px solid #cdcdcd;">2442712</td>
      <td style="padding:5px;border:1px solid #cdcdcd;">8702٭</td>
      <td style="padding:5px;border:1px solid #cdcdcd;">$16204</td>
    </tr>
  </tbody>
</table>

<p>Some proposed solutions have been to “trickle down” donations from the well-known projects to the least known, guided by tools such as <a href="https://backyourstack.com/">BackYourStack</a> and <a href="https://github.blog/2019-05-23-announcing-github-sponsors-a-new-way-to-contribute-to-open-source/#native-to-your-github-workflow">GitHub’s new Contributor overview</a>. This would work if the well-known projects had a huge surplus to share with transitive dependencies. That is hardly the case: only Vue.js has enough surplus to share, and it could only do that with 3 or 4 other developers. Vue.js is the exception; other projects cannot afford to share their income, because that would leave everyone involved poorly paid.</p>

<p>In the case of Babel and Core-js, there isn’t a lot of surplus to share forward. One of Henry Zhu’s points in his talk was precisely that the money received is already too limited. It might seem like Babel is <em>the</em> visible project in this situation, but it surprised me to hear from Henry that many people are not aware of Babel even though they use it, because they might be using it as a transitive dependency.</p>

<p>On the other side of the coin, the maintainers of lower-level libraries recognize the need to partner with more visible projects or <a href="https://twitter.com/wooorm/status/1062404997240012800">even merge projects</a> in order to increase overall visibility, popularity, and thus funding. This is the case with Unified by Titus, a project you might not have heard of, yet Unified and its many packages are used in <a href="https://github.com/mdx-js/mdx/blob/deff36bebfedb3a9de0a0575ee9a1b55b9b8aa18/package.json#L20">MDX</a>, <a href="https://github.com/gatsbyjs/gatsby/blob/25d4a4dab66e04717fb09dc5edb1f7b856fc41ff/packages/gatsby-transformer-remark/package.json#L26">Gatsby</a>, <a href="https://github.com/prettier/prettier/blob/24f161db565c1a6692ee98191193d9cf9ff31d6f/package.json#L66">Prettier</a>, <a href="https://github.com/storybookjs/storybook/blob/fed2ffa5e2919220f0508e540b2eae848523fee5/package.json#L214">Storybook</a> and many others.</p>

<p>It is also not true that popular projects are financially better off than their less popular dependencies. Prettier (32k stars) uses Unified (1k stars) as a dependency, but Unified has more yearly revenue than Prettier. In fact, many of the popular projects that depend on Unified receive less funding per team member. Yet Unified itself still receives below industry standards, so it is in no position to trickle that funding down (or up?).</p>

<p>Other times, it is not easy to say that a project A using project B should necessarily donate to B, because it might be that B also uses A! For instance, <a href="https://github.com/prettier/prettier/blob/24f161db565c1a6692ee98191193d9cf9ff31d6f/package.json#L19">Babel is a dependency in Prettier</a>, and <a href="https://github.com/babel/babel/blob/f92c2ae830dbb32013a36fa74facd2ef95b9947d/package.json#L59">Prettier is a dependency in Babel</a>. Many of the projects covered in this study probably have such a complex web of dependencies <em>between</em> each other that it becomes difficult to say how money should flow within them.</p>
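<p>To see why a simple “donate to your dependencies” rule breaks down, consider a toy dependency graph with a mutual dependency like Babel and Prettier (a hypothetical sketch, not the projects’ real dependency trees):</p>

```python
# Toy dependency graph; an edge A -> B means "A depends on B".
deps = {
    "Prettier": ["Babel"],
    "Babel": ["Prettier", "Core-js"],
    "Core-js": [],
}

def find_cycle(graph):
    """Return one dependency cycle as a list of names, or None."""
    visiting, done = set(), set()

    def dfs(node, path):
        if node in visiting:  # back-edge: we found a cycle
            return path[path.index(node):] + [node]
        if node in done:
            return None
        visiting.add(node)
        for dep in graph.get(node, []):
            cycle = dfs(dep, path + [node])
            if cycle:
                return cycle
        visiting.discard(node)
        done.add(node)
        return None

    for node in graph:
        cycle = dfs(node, [])
        if cycle:
            return cycle
    return None

print(find_cycle(deps))  # ['Prettier', 'Babel', 'Prettier']
```

<p>Any cycle means there is no acyclic direction in which donations can simply “flow downstream”.</p>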

<h2 id="exploitation">Exploitation</h2>

<p>The total amount of money being put into open source is not enough for all the maintainers. If we add up all the yearly revenue from the projects in this data set, it comes to $2.5 million. The median salary is approximately $9k, which is below the poverty line. If we split that money up evenly, each contributor would get roughly $22k, which is still below industry standards.</p>
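<p>Restating the aggregate numbers as arithmetic (a sketch; the $2.5 million and $22k figures are the ones quoted above, and the contributor count is merely what they imply):</p>

```python
total_yearly = 2_500_000  # summed yearly revenue of the sampled projects
even_split = 22_000       # approximate per-person result of an even split

# The even split implies roughly this many full-time contributors in the set:
implied_contributors = round(total_yearly / even_split)
print(implied_contributors)  # ~114
```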

<p>The core problem is not that open source projects are failing to share the money they receive. The problem is that, in total numbers, open source is not getting enough money. $2.5 million is not enough. To put this number into perspective, startups typically get much more than that.</p>

<p><a href="https://www.crunchbase.com/organization/tidelift">Tidelift has received $40 million</a> in funding to “help open source creators and maintainers get fairly compensated for their work” <a href="https://tidelift.com/docs/lifting/paying">(quote)</a>. They have a <a href="https://tidelift.com/about">team of 27 people</a>, some of them ex-employees of large companies (such as Google and GitHub). They probably don’t receive the lower tier of salaries. Yet many of the <a href="https://tidelift.com/subscription">open source projects they showcase</a> on their website are below the poverty line regarding income from donations. We do not actually know how much Tidelift is giving to the maintainers of these projects, but their <a href="https://tidelift.com/subscription/pricing">subscription pricing</a> is very high. Opaqueness of price and cost structure has historically helped companies hide inequality.</p>

<p>GitHub was <a href="https://venturebeat.com/2018/06/04/microsoft-confirms-it-will-acquire-github-for-7-5-billion/">bought by Microsoft for $7.5 billion</a>. To make that quantity easier to grok: the amount of money Microsoft paid to acquire GitHub – the company – is more than <strong>3000x</strong> what the open source community is getting yearly. In other words, if the open source community saved every penny it ever received, after a couple thousand years it could perhaps have enough money to buy GitHub jointly. GitHub itself now has its own <a href="https://www.youtube.com/watch?v=n47rCa9dxf8">Open Source Economy team</a> (how big is this team, and what are their salaries?), but the new GitHub Sponsors feature is far less transparent than OpenCollective. In contrast to GitHub’s traditional culture of open data (such as the commits calendar or the contributors chart), when it comes to donations a user cannot know how much each open source maintainer is getting. It’s opaque.</p>
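<p>The “3000x” comparison checks out with simple division, using the figures quoted in this paragraph:</p>

```python
github_price = 7_500_000_000  # Microsoft's acquisition price for GitHub
oss_yearly = 2_500_000        # yearly donations across the sampled projects

multiple = github_price / oss_yearly
# 3000.0 -- also the number of years of saving every penny needed to match it
print(multiple)
```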

<p>If Microsoft GitHub is serious about helping fund open source, they should put their money where their mouth is: donate at least $1 billion to open source projects. Even a mere $1.5 million per year would be enough to turn all the projects in this study green. The <a href="https://help.github.com/en/articles/about-github-sponsors#about-the-github-sponsors-matching-fund">Matching Fund</a> in GitHub Sponsors is not enough: it gives a maintainer at most $5k per year, which is not sufficient to raise a maintainer from the poverty threshold up to the industry standard.</p>

<p>We now have data showing that open source creators and maintainers receive low income, and data showing that the companies “helping” open source receive millions, and most likely pay top salaries. Other million- and billion-dollar companies make profits by combining open source libraries and components to build proprietary software. I understand <a href="https://youtu.be/VBwWbFpkltg?list=PLE7tQUdRKcyaOq3HlRm9h_Q_WhWKqm5xc&amp;t=1362">DHH’s stance on <em>‘There is no tragedy’</em></a> in open source sustainability, and in fact when I watched his talk I was inclined to agree. However, the data I recently compiled – out of curiosity – showed me the default reality of open source finances, indicating a severe imbalance between the quality of the work and its compensation. Full-time maintainers are technically talented people responsible for issue management, security, and navigating toxic complaints, while often being paid below industry standards.</p>

<p>The struggle for open source sustainability is the millennia-old struggle of humanity to free itself from slavery, colonization, and exploitation. This is not the first time hard-working, honest people have given their all for unfair compensation.</p>

<p>This is therefore not a new problem, and it does not require complicated new solutions. It is simply a version of injustice. Fixing it is not a matter of expecting compassion and moral behavior from companies, for companies are fundamentally built to do something other than that. Companies simply follow some basic financial rules of society while trying to optimize for profit and/or domination.</p>

<p>Open source infrastructure is a commons, much like our ecological systems. Because our societies did not have rules to prevent the ecological systems from being exploited, companies have <a href="https://ourworldindata.org/fossil-fuels">engaged in industrialized resource extraction</a>. Over many decades this is <a href="https://ourworldindata.org/forests">depleting the environment</a>, and now we are facing a <a href="https://www.theguardian.com/environment/2019/may/17/why-the-guardian-is-changing-the-language-it-uses-about-the-environment">climate crisis</a>, <a href="https://climate.nasa.gov/">proven</a> <a href="https://archive.ipcc.ch/pdf/assessment-report/ar5/syr/SYR_AR5_FINAL_full_wcover.pdf">through scientific consensus</a> to be a <a href="https://news.un.org/en/story/2018/05/1009782">substantial threat to humanity</a> and <a href="https://www.ipbes.net/news/Media-Release-Global-Assessment">all life on the planet</a>. Open source misappropriation is simply a small version of that, with less dramatic consequences.</p>

<p>If you want to help open source become sustainable, rise up and help humanity write new rules for society that hold power and capitalist greed accountable for abuse. If you are wondering what that looks like, here are some initial suggestions of concrete actions to take:</p>

<ul>
  <li>Only accept jobs at companies that donate a significant portion of their profit (at least 0.5%) to open source, or at companies that don’t fundamentally depend on open source for their products</li>
  <li>Donate to open source if you have a decent enough salary</li>
  <li>Don’t discard unionizing (I am writing this in Finland, where 65% of all workers are in unions)</li>
  <li>Don’t discard <a href="https://licensezero.com/">alternative licenses</a> for new projects</li>
  <li>Pressure Microsoft to donate millions to open source projects</li>
  <li>Expose the truth on how companies are behaving by publishing data studies like this one</li>
</ul>]]></content><author><name>André Staltz</name></author><category term="blog" /><summary type="html"><![CDATA[Most people believe that open source sustainability is a difficult problem to solve. As an open source developer myself, my own perspective to this problem was more optimistic: I believe in the donation model, for its simplicity and possibility to scale. However, I recently met other open source developers that make a living from donations, and they helped widen my perspective. At Amsterdam.js, I heard Henry Zhu speak about sustainability in the Babel project and beyond, and it was a pretty dire picture. Later, over breakfast, Henry and I had a deeper conversation on this topic. In Amsterdam I also met up with Titus, who maintains the Unified project full-time. Meeting with these people I confirmed my belief in the donation model for sustainability. It works. But, what really stood out to me was the question: is it fair? I decided to collect data from OpenCollective and GitHub, and take a more scientific sample of the situation. The results I found were shocking: there were two clearly sustainable open source projects, but the majority (more than 80%) of projects that we usually consider sustainable are actually receiving income below industry standards or even below the poverty threshold. What the data says I picked popular open source projects from OpenCollective, and selected the yearly income data from each. Then I looked up their GitHub repositories, to measure the count of stars, and how many “full-time” contributors they have had in the past 12 months. Sometimes I also looked up the Patreon pages for those few maintainers that had one, and added that data to the yearly income for the project. For instance, it is obvious that Evan You gets money on Patreon to work on Vue.js. These data points allowed me to measure: project popularity (a proportional indicator of the number of users), yearly revenue for the whole team, and team size. 
It is difficult to derive exactly how many users there are for each project, specially because they may be transitive users, not aware that they are using the project. This is why I went with GitHub stars as a good enough measurement for user count, because it counts persons (unlike download count which can include CI computers) that are conscious about the project’s worth. I scanned 58 projects in total, which may seem like a small number, but this was done from the most popular to the least. Popularity is very important to scale the donations, and it turns out that very few projects have enough popularity to achieve fair compensation. In other words, among these fifty most popular projects, the majority of them are below sustainability thresholds. I believe that if I would cover more data points, those would be likely less popular than these ones. This data set might be biased towards JavaScript projects on OpenCollective, but the choice for sampling OpenCollective is because it provides easy transparent data on the finances of various projects. I want to remind the reader of the existence of other popular open source projects such as Linux, nginx, VideoLAN, and others. It would be good to incorporate the financial data from those projects in this data set. From GitHub data and OpenCollective, I was able to calculate how much yearly revenue for a project goes to each “full-time equivalent” contributor. This is essentially their salary. Or, said better, this is how much their salary via donations would be if they were working exclusively on the open source project, without any complementary income. It is likely that a sizable amount of creators and maintainers work only part-time on their projects. Those that work full-time sometimes complement their income with savings or by living in a country with lower costs of living, or both (Sindre Sorhus). 
Then, based on the latest StackOverflow developer survey, we know that the low end of developer salaries is around $40k, while the high end of developer salaries is above $100k. That range depicts the industry standard for developers, given their status as knowledge workers, many of which are living in OECD countries. This allowed me to classify the results into four categories: BLUE: 6-figure salary GREEN: 5-figure salary within industry standards ORANGE: 5-figure salary below our industry standards RED: salary below the official US poverty threshold The first chart, below, shows team size and “price” for each GitHub star. More than 50% of projects are red: they cannot sustain their maintainers above the poverty line. 31% of the projects are orange, consisting of developers willing to work for a salary that would be considered unacceptable in our industry. 12% are green, and only 3% are blue: Webpack and Vue.js. Income per GitHub star is important: sustainable projects generally have above $2/star. The median value, however, is $1.22/star. Team size is also important for sustainability: the smaller the team, the more likely it can sustain its maintainers. The median donation per year is $217, which is substantial when understood on an individual level, but in reality includes sponsorship from companies that are doing this also for their own marketing purposes. The next chart shows how revenue scales with popularity. You can browse the data yourself by accessing this Dat archive with a LibreOffice Calc spreadsheet: dat://bf7b912fff1e64a52b803444d871433c5946c990ae51f2044056bf6f9655ecbf Popularity versus fairness While popularity is key to green and blue sustainability, there are popular products in red, such as Prettier, Curl, Jekyll, Electron (update:) AVA. This doesn’t mean the people working on those projects are poor, because in several cases the maintainers have jobs at companies that allow open source contributions. 
What it does mean, however, is that unless companies take an active role in supporting open source with significant funding, what’s left is a situation where most open source maintainers are severely underfunded. On donations alone, open source is sustainable (fairly within industry standards) in a sweet spot: when a popular project, with a sufficiently small team, knows how to gather significant funding from a crowd of donators or sponsor organizations. Fair sustainability is sensitive to these parameters. Because visibility is fundamental for donation-driven sustainability, the “invisible infrastructure” projects are often in a much worse situation that the visible ones. For instance, Core-js is less popular than Babel, although it is a dependency in Babel. Library Used by Stars 'Salary' Babel 350284 33412٭ $70016 Core-js 2442712 8702٭ $16204 Some proposed solutions have been to “trickle down” donations from the well-known projects to the least, guided by tools such as BackYourStack and GitHub’s new Contributor overview. This would work if the well-known projects had a huge surplus to share with transitive dependencies. That is hardly possible, only Vue.js has enough surplus to share, and it could only do that with 3 or 4 other developers. Vue.js is the exception, other projects cannot afford sharing their income, because that would cause everyone involved to receive poorly. In the case of Babel and Core-js, there isn’t a lot of surplus to share forwards. One of Henry Zhu’s points in his talk was precisely that the money received is already too limited. It might seem like Babel is the visible project in this situation, but it surprised me to hear from Henry that many people are not aware of Babel although they use it, because they might be using it as a transitive dependency. 
From the other side of the coin, the maintainers of lower level libraries recognize the need to partner with more visible projects or even merge projects in order to increase overall visibility, popularity, and thus funding. This is the case of Unified by Titus, which is a project you might not have heard of, but Unified and its many packages are used in MDX, Gatsby, Prettier, Storybook and many others. It is also not true that popular projects are financially better off than their less popular dependencies. Prettier (32k stars) uses Unified (1k stars) as a dependency, but Unified has more yearly revenue than Prettier. In fact, many of the popular projects that depend on Unified are receiving less funding per team member. But Unified itself is still receiving below industry standards, not in a situation of trickling down (or up?) that funding. Other times, it’s not easy to say that when a project A is using project B, it should necessarily donate to B, because it might be that B also uses A! For instance, Babel is a dependency in Prettier, and Prettier is a dependency in Babel. Probably many of the projects covered in this study have a complex web of dependencies between each other, that it becomes difficult to say how should money flow within these projects. Exploitation The total amount of money being put into open source is not enough for all the maintainers. If we add up all of the yearly revenue from those projects in this data set, it’s $2.5 million. The median salary is approximately $9k, which is below the poverty line. If split up that money evenly, that’s roughly $22k, which is still below industry standards. The core problem is not that open source projects are not sharing the money received. The problem is that, in total numbers, open source is not getting enough money. $2.5 million is not enough. To put this number into perspective, startups get typically much more than that. 
Tidelift has received $40 million in funding, to “help open source creators and maintainers get fairly compensated for their work” (quote). They have a team of 27 people, some of them ex-employees from large companies (such as Google and GitHub). They probably don’t receive the lower tier of salaries. Yet, many of the open source projects they showcase on their website are below poverty line regarding income from donations. We actually do not know how much Tidelift is giving to the maintainers of these projects, but their subscription pricing is very high. Opaqueness of price and cost structure has historically helped companies hide inequality. GitHub was bought by Microsoft for $7.5 billion. To make that quantity easier to grok, the amount of money Microsoft paid to acquire GitHub – the company – is more than 3000x what the open source community is getting yearly. In other words, if the open source community saved up every penny of the money they ever received, after a couple thousand years they could perhaps have enough money to buy GitHub jointly. And now GitHub itself has its own Open Source Economy team (how big is this team and what are their salaries?), but the new GitHub sponsors feature is far less transparent than OpenCollective. Against GitHub’s traditional culture of open data (such as the commits calendar or the contributors chart), when it comes to donations, a user cannot know how much each open source maintainer is getting. It’s opaque. If Microsoft GitHub is serious about helping fund open source, they should put their money where their mouth is: donate at least $1 billion to open source projects. Even a mere $1.5 million per year would be enough to make all the projects in this study become green. The Matching Fund in GitHub Sponsors is not enough, it gives a maintainer at most just $5k in a year, which is not sufficient to raise the maintainer from the poverty threshold up to industry standard. 
We now have data to say that open source creators and maintainers are receiving low income, and we have data to say that companies “helping” open source are receiving millions, and most likely top salaries. Other millionaire and billionaire companies are making profits by combining open source libraries and components to build proprietary software. I understand DHH’s stance on ‘There is no tragedy’ in open source sustainability, and in fact when I watched his talk I was inclined to agree. However, the recent data I compiled – out of curiosity – showed me the default reality of open source finances, indicating a severe imbalance between work quality and compensation. Full-time maintainers are technically talented people responsible for issue management, security, navigating toxic complaints, while often receiving below the industry standards. The struggle of open source sustainability is the millennium-old struggle of humanity to free itself from slavery, colonization, and exploitation. This is not the first time hard-working honest people are giving their all, for unfair compensation. This is therefore not a new problem, and it does not require complicated new solutions. It is simply a version of injustice. To fix it is not a matter of receiving compassion and moral behavior from companies, for companies are fundamentally built to do something else than that. Companies simply follow some basic financial rules of society while trying to optimize for profit and/or domination. Open source infrastructure is a commons, much like our ecological systems. Because our societies did not have rules to prevent the ecological systems from being exploited, companies have engaged in industrialized resource extraction. Over many decades this is depleting the environment, and now we are facing a climate crisis, proven through scientific consensus to be a substantial threat to humanity and all life on the planet. 
Open source misappropriation is simply a small version of that, with less dramatic consequences. If you want to help open source become sustainable, rise up and help humanity write new rules for society, that keep power and capitalist thirst accountable for abuse. If you are wondering what that looks like, here are some initial suggestions of concrete actions to take: Only accept jobs at companies that donate a significant portion of their profit (at least 0,5%) to open source, or companies which don’t fundamentally depend on open source for their products Donate to open source if you have a decent enough salary Don’t discard unionizing (I am writing this in Finland, where 65% of all workers are in unions) Don’t discard alternative licenses for new projects Pressure Microsoft to donate millions to open source projects Expose the truth on how companies are behaving by publishing data studies like this one]]></summary></entry><entry><title type="html">The year tech giants peaked – 2018, a retrospective</title><link href="https://staltz.com/the-year-tech-giants-peaked-2018-a-retrospective.html" rel="alternate" type="text/html" title="The year tech giants peaked – 2018, a retrospective" /><published>2018-12-27T00:00:00+02:00</published><updated>2018-12-27T00:00:00+02:00</updated><id>https://staltz.com/the-year-tech-giants-peaked-2018-a-retrospective</id><content type="html" xml:base="https://staltz.com/the-year-tech-giants-peaked-2018-a-retrospective.html"><![CDATA[<p>2018 marked history as the year when governments made tech giants responsible for election interference and tumult in democracy. Centralization and decentralization were central themes in cyberspace this year, while regulation and freedom also defined the rhetoric of many actors. We understood how closely cyberspace and meatspace affect each other, demonstrated by a couple of key events in 2018.</p>

<h2 id="fb-and-goog-peaked">FB and GOOG peaked</h2>

<p><a href="/img/2018-tech-peaked-zuck-cameras.jpg"><img src="/img/2018-tech-peaked-zuck-cameras.jpg" alt="Mark Zuckerberg being photographed during congressional hearings in April" /></a></p>

<p>(Source: Reuters)</p>

<p>When the <a href="https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html">Cambridge Analytica data scandal</a> broke on March 17th, FB stock took a big hit. In April, Zuckerberg appeared before the U.S. Congress to testify, giving answers that many felt were unsatisfactory. A few months later, when their <a href="https://investor.fb.com/financials/?section=secfilings">Q2 earnings report</a> demonstrated little to no growth, many investors immediately sold their stock, leading to a 20% drop in price ($120 billion in value). Many seemed to realize that FB is not doing well, neither as a business, nor as a platform for humane discourse.</p>

<p>As a result, FB stock prices this year went two years back in time, returning to levels similar to early 2017. GOOG stock had a similar performance, as GOOG also received negative press related to YouTube’s role in election interference, as well as <a href="https://www.businessinsider.in/F-you-leakers-A-former-senior-Google-employee-says-a-frantic-quest-to-stop-internal-info-getting-out-is-now-managements-number-one-priority/articleshow/67000790.cms">leaks</a> that revealed deteriorating company cohesion, a <a href="https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html">lower commitment to ethics</a>, and <a href="https://www.nytimes.com/2018/11/01/technology/google-walkout-sexual-harassment.html">widely protested handling</a> of sexual harassment incidents. Other Silicon Valley companies shared similar troubles. In 2017, Uber was a <a href="https://www.recode.net/2017/8/20/16164176/uber-2017-timeline-scandal">source of company culture scandals</a>, but in 2018 <a href="https://www.bloomberg.com/news/articles/2018-11-14/uber-revenue-slows-as-quarterly-loss-surges-to-1-1-billion">its business showed signs of slowing</a>.</p>

<p><a href="/img/2018-tech-peaked-fbstock.png"><img src="/img/2018-tech-peaked-fbstock.png" alt="FB stock performance during 2018" /></a></p>

<p>(Source: <a href="https://www.marketwatch.com/investing/stock/fb">MarketWatch</a>)</p>

<p>The overall sentiment among the population has shifted: in an <a href="https://www.axios.com/america-sours-on-social-media-giants-1542234046-c48fb55b-48d6-4c96-9ea9-a36e80ab5deb.html">American opinion poll</a>, more people now believe that social media hurts democracy than helps it. The public perception of tech giants has gotten worse. In 2017, <a href="https://www.nytimes.com/2017/01/31/business/delete-uber.html">#DeleteUber</a> was a trending hashtag. In 2018, it was <a href="https://nordic.businessinsider.com/deletefacebook-facebook-movement-2018-3">#DeleteFacebook</a>’s turn, causing <a href="https://www.cnbc.com/2018/09/05/facebook-exodus-44-percent-of-americans-age-18-29-have-deleted-app.html">more than 40% of young adults in the U.S. to delete the app</a> from their phones. Even the co-founder of WhatsApp (a FB subsidiary), <a href="https://twitter.com/brianacton/status/976231995846963201?s=19">Brian Acton, used the same hashtag</a> to express his views after quitting his position at FB. Other directors, such as the co-founders of Instagram (another FB subsidiary), had a more <a href="https://www.theverge.com/2018/9/24/17899208/instagram-cofounders-resign-facebook-kevin-systrom-mike-krieger">graceful exit</a> from FB. But the consistent message coming from many <a href="https://www.theguardian.com/technology/2017/dec/11/facebook-former-executive-ripping-society-apart">former executives</a>, <a href="https://www.theguardian.com/technology/2017/nov/09/facebook-sean-parker-vulnerability-brain-psychology">presidents</a> and <a href="https://www.theguardian.com/technology/2018/jan/13/mark-zuckerberg-tech-addiction-investors-speak-up">investors</a> is that FB has crossed the line, becoming psychologically harmful and destructive to how society works.</p>

<p>The common themes for tech giants in 2018 were an <strong>evaporation of reputation</strong> and a <strong>decline in business</strong>. Their reputation was attacked internally and externally, from multiple angles and stories. This year, many discovered that these companies are darker than we thought. I have been an outspoken critic of FB for years, but if you had told me earlier this year that <a href="https://www.nytimes.com/2018/11/15/technology/facebook-definers-opposition-research.html">FB would hire <em>Definers Public Affairs</em></a> to push negative stories about FB-critical senators, I would have dismissed that as an exaggerated prediction. But it happened, and we are all learning how dark these tech giants actually are, thanks to internal leaks and adversarial journalism.</p>

<p>On the business side, it seems these giants have saturated their markets. FB’s Facebook growth has stalled in the USA, probably because more than 70% of Americans are already regular (monthly active) Facebook users. GOOG focused on maintaining its current Search and YouTube ad revenue, while attempting to grow its AI efforts and Cloud business, where it has not yet secured leadership. GOOG is actually <a href="https://www.bloomberg.com/news/articles/2018-11-13/google-may-have-to-get-used-to-third-place-in-the-cloud">lagging behind when it comes to the Cloud</a>, and its AI products are promising but not yet reliable revenue streams. In 2018, GOOG also <em>continued to discontinue</em> many of its non-key products, such as <a href="https://en.wikipedia.org/wiki/Inbox_by_Gmail">Inbox</a>, <a href="https://en.wikipedia.org/wiki/Google%2B#Shutdown_of_consumer_version">Google+ for consumers</a>, and <a href="https://support.google.com/fusiontables/answer/9185417">Fusion Tables</a>, which harms its credibility as a <em>reliable</em> provider of services, an important trait for a Cloud business.</p>

<h2 id="less-social-networks">Less social networks</h2>

<p><a href="/img/2018-tech-peaked-tumblr.jpeg"><img src="/img/2018-tech-peaked-tumblr.jpeg" alt="Painting depicting Tumblr exodus of users" /></a></p>

<p>(Unknown source, please <a href="mailto:contact@staltz.com">contact me</a> if you know who the author is)</p>

<p>The <a href="https://www.blog.google/technology/safety-security/project-strobe/">sunsetting of Google+ for consumers</a> is an important marker for the web in 2018, because it consolidates FB’s dominance in social networks. It is not the first time GOOG has discontinued a large social network: <a href="https://en.wikipedia.org/wiki/Orkut">Orkut</a> was once a social network with tens of millions of active users. Ironically, one of the reasons GOOG discontinued Orkut was the prospect of Google+ and its potential to replace Orkut.</p>

<p>The problem with discontinuing small platforms (though still multi-million-user large!) is that it removes consumer choice when the tide changes. Orkut was very popular among Brazilians, but began losing ground to Facebook in 2009. However, now that Facebook’s credibility is decreasing, users have no option of going back to a previous social network. This lack of platform competition is due to such platforms being proprietary.</p>

<p>It is easy for one company to acquire and assimilate another social network. Companies do that because by joining platforms together they acquire more power and efficiency. It is also easy for companies to discontinue a platform, and they do that when the costs of running the platform don’t justify the small gains. Therefore, among proprietary for-profit social platforms, only the large and merged platforms tend to survive. Hence, Facebook. However, had the platform been a non-commercial open protocol (such as the Web or Email), its availability would be much more reliable and independent of any company’s seasonal performance, likely surviving for many decades.</p>

<p>Another 2018 story on social networks was <a href="https://www.theverge.com/2018/12/3/18123752/tumblr-adult-content-porn-ban-date-explicit-changes-why-safe-mode">the content crackdown that Tumblr imposed</a> on adult content, including artistic communities, which forced many users away from its platform. For many of them it spelled the end of that social network. As a proprietary platform, Tumblr was first acquired by Yahoo, which in turn was acquired by Verizon in 2017. This means Tumblr is subject to the same instability and uncertainty inherent to any platform that hangs on the decisions of a few business executives legally entitled to steer it.</p>

<h2 id="regulation">Regulation</h2>

<p>In the European Union, 2018 was the year GDPR switched on, requiring deep changes to a huge proportion of sites and internet services run by organizations around the world. For many sites targeting national (e.g. American) audiences, such as the <a href="http://www.tribpub.com/gdpr/chicagotribune.com/">Chicago Tribune</a>, GDPR compliance was not worth it, so these sites became unavailable to European readers. While GDPR may have modestly advanced its original goal of increasing user control over personal data, it also played a role in furthering the <a href="https://en.wikipedia.org/wiki/Splinternet">balkanization of the internet</a>, already the norm in China.</p>

<p><a href="/img/2018-tech-peaked-gdpr-chicago.png"><img src="/img/2018-tech-peaked-gdpr-chicago.png" alt="Chicago Tribune frontpage blocked to EU users" /></a></p>

<p>(Source: <a href="http://www.tribpub.com/gdpr/chicagotribune.com/">Chicago Tribune</a>)</p>

<p>The EU is aiming a lot of new regulation at tech giants and the internet at large. GDPR was not the first: Europeans have been familiar with the <a href="http://ec.europa.eu/ipg/basics/legal/cookies/index_en.htm">infamous cookie banner</a> for years. While most of this regulation is intended to limit corporate exploitation and protect user freedom, some new proposals, such as the <a href="https://en.wikipedia.org/wiki/Directive_on_Copyright_in_the_Digital_Single_Market#Article_11">Link Tax and Upload Filters</a>, may significantly harm openness and freedom on the internet. In September 2018, unfortunately, the <a href="https://juliareda.eu/2018/09/ep-endorses-upload-filters/">EU Parliament decided to proceed</a> with the proposal.</p>

<p>Just as GDPR triggered suddenly for many unprepared organizations, new legislation may continue to add obstacles to internet traffic and rich information exchange globally. Many of these organizations may no longer find it cost-effective to serve European users.</p>

<h2 id="ny-fb-cyberwar">NY-FB cyberwar</h2>

<p>In 2018 the press and FB became enemies. Last year I blogged about how the <a href="https://staltz.com/the-web-began-dying-in-2014-heres-how.html">Web began dying in 2014</a>, and it has to do with traffic sources to news sites. In recent years, the online press became dependent on FB and GOOG for the <em>vast majority</em> of their traffic, putting professional journalism at the mercy of these giant platforms. Moreover, large news sites often found themselves competing with lower-tier, sensationalist, fabricated articles spreading quickly on platforms like Facebook. Naturally, the press got upset.</p>

<p>The New York Times this year published a storm of articles specifically on FB scandals, too many for me to quote (<a href="https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html">[1]</a> <a href="https://www.nytimes.com/2018/11/14/technology/facebook-data-russia-election-racism.html">[2]</a> <a href="https://www.nytimes.com/2018/11/15/technology/facebook-definers-opposition-research.html">[3]</a> etc). Similar in tone, the <a href="https://www.newyorker.com/news/daily-comment/facebook-and-the-age-of-manipulation">New Yorker</a> also had incisive stories to report.</p>

<p>In Menlo Park, CA, Zuckerberg started the year by pledging to “<a href="https://www.facebook.com/zuck/posts/10104380170714571">fix Facebook</a>”. Months later, after the turbulence caused by the Guardian’s and NYT’s articles about Cambridge Analytica, Zuckerberg adopted a <a href="https://www.wsj.com/articles/with-facebook-at-war-zuckerberg-adopts-more-aggressive-style-1542577980">war attitude</a> internally at FB.</p>

<p>The tense exchange between FB and the NY press is logical: they compete in the same market, attention and advertising. And while FB <a href="https://www.theguardian.com/technology/2018/jul/02/facebook-mark-zuckerberg-platform-publisher-lawsuit">denies that it is a publisher</a>, what matters is that both FB and the press monetize eyeballs, and their competition for attention is a zero-sum game.</p>

<h2 id="amzn--msft--aapl-soared">AMZN / MSFT / AAPL soared</h2>

<p>Meanwhile, the other tech giants had an easier year, staying out of the negative spotlight in the press, not having to answer frustrated senators in Congress, and increasing their market capitalization. While FB and GOOG finished the year with stock values down 29% and 7% (respectively) compared to the beginning of the year, AMZN and MSFT stock prices went up 17% and 14% (respectively). Jeff Bezos became the richest person on Earth in July. AAPL made history by becoming the world’s first trillion-dollar company. Soon after, though, AAPL’s value declined and MSFT overtook it to claim the title of most valuable U.S. company.</p>

<p>Because these tech giants are not competing in advertising and are not surveillance-capitalism companies, they had a much easier year in terms of publicity. While none of them are paragons of user freedom and privacy, people tend to place the blame mostly on FB and GOOG for the psychological damage of social media addiction, misinformation, and political social engineering.</p>

<h2 id="blockchain-winter">Blockchain winter</h2>

<p>Cryptocurrencies peaked in value a few weeks before 2018 began, and after 12 months, all of the top cryptocurrencies saw a 70%+ drop in value. You could easily say the Bitcoin bubble burst, as many were already expecting in 2017.</p>

<p>ICOs in 2018 raised <a href="https://www.coinschedule.com/stats.html?year=2018">3 times more money</a> than <a href="https://www.coinschedule.com/stats.html?year=2017">in 2017</a>, but 2018 was not a great year for ICOs. Many people noticed a large number of scam ICOs, and in the U.S. the government began the process of <a href="https://techcrunch.com/2018/03/01/the-sec-is-reportedly-investigating-a-number-of-icos/">regulating ICOs and often classifying them as securities</a>. The number of ICOs has been steadily declining since mid-2018. Overall, 2018 was a difficult year for cryptocurrencies, but there are plenty of respectable active projects to compensate for the scams, and even though cryptocurrencies entered mainstream and casual discourse, we are still talking about an industry in its infancy.</p>

<p>Cryptocurrencies will become the center of attention again when the global stock market enters a recession and people look for alternative stores of value. At the very end of 2018, we may already be seeing this occur, as stocks in America have <a href="https://www.marketwatch.com/story/peter-schiff-says-were-not-in-a-bear-market-were-in-a-house-of-cards-that-the-fed-built-2018-12-19">entered a bear market around December 21st</a>, while major cryptocurrencies like Bitcoin and Ethereum have simultaneously risen noticeably.</p>

<h2 id="peer-to-peer-grassroots">Peer-to-peer grassroots</h2>

<p>When the topic is <em>decentralized technologies</em>, cryptocurrencies usually take the spotlight, but one ramification of decentralization is the family of non-blockchain peer-to-peer (P2P) projects, also known as the “Decentralized Web” (DWeb): <a href="https://ipfs.io/">IPFS</a>, <a href="https://datproject.org/">Dat</a>, <a href="https://www.scuttlebutt.nz/">SSB</a>, <a href="https://zeronet.io/">ZeroNet</a>, <a href="https://holochain.org/">Holochain</a>, <a href="https://solid.inrupt.com/">Solid</a>, <a href="https://webtorrent.io/">WebTorrent</a>, <a href="https://matrix.org/blog/home/">Matrix</a>, <a href="https://safenetwork.tech/">SAFE</a>, <a href="https://gun.eco/">GUN</a>, <a href="https://althea.org/">Althea</a>, etc. These have had a great year, although they are small in scale. One could say these P2P projects behaved in 2018 like cryptocurrencies did in 2015–2016: not occupying mainstream discourse, but still promising, in active use, diverse, and thriving.</p>

<p>The highlight of this movement was the <a href="https://decentralizedweb.net/">Decentralized Web Summit</a> that took place in San Francisco in August. Young pioneers and industry veterans were equally excited about the tangible innovation and opportunities ahead. Vint Cerf, co-inventor of the TCP/IP protocols, <a href="https://twitter.com/kentbye/status/1024691013997146112">called it a historic summit</a>. Tim Berners-Lee, creator of the WWW, was also present.</p>

<p>On January 5th this year, as fellow developer André Garzia was working on Patchfox, a Firefox extension for <a href="https://scuttlebutt.nz">SSB</a>, he wanted to add support for the <code class="language-plaintext highlighter-rouge">ssb://</code> protocol in addresses, and sent a commit to Firefox permitting a few more decentralized protocols. I call this the <a href="https://hg.mozilla.org/mozilla-central/rev/c2cb8a06bcf1">DWeb big bang commit</a>: it was a small effort, but it brought these protocols to public attention, as the commit ended up in the <a href="https://blog.mozilla.org/addons/2018/01/26/extensions-firefox-59/">Firefox Update changelog</a>, leading to <a href="https://www.theinquirer.net/inquirer/news/3025478/firefox-59-will-support-decentralised-internet-protocols">articles reporting the change</a>.</p>
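<p>For readers curious what that commit unlocked: once a scheme like <code class="language-plaintext highlighter-rouge">ssb</code> is on Firefox’s whitelist, a WebExtension can claim it through the standard <code class="language-plaintext highlighter-rouge">protocol_handlers</code> manifest key. A minimal sketch, in which the extension name and handler URL are hypothetical:</p>

```json
{
  "manifest_version": 2,
  "name": "ssb-handler-sketch",
  "version": "0.1",
  "protocol_handlers": [
    {
      "protocol": "ssb",
      "name": "SSB handler",
      "uriTemplate": "https://example.org/open?uri=%s"
    }
  ]
}
```

<p>When the user navigates to an <code class="language-plaintext highlighter-rouge">ssb://</code> address, Firefox substitutes it for <code class="language-plaintext highlighter-rouge">%s</code> and hands the request to the registered handler page.</p>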

<p>Months later, the <a href="https://hacks.mozilla.org/2018/07/introducing-the-d-web/">Mozilla Hacks blog featured a series of articles on DWeb protocols</a>, Mozilla gave an <a href="https://twitter.com/dat_project/status/1037420103929872384">open source grant to the Dat project</a>, and began the <a href="https://twitter.com/ummjackson/status/1008092555693576193">libdweb experiment</a>, a set of new browser capabilities useful for DWeb protocols, enabling <a href="https://twitter.com/substack/status/1065630162686013440">TCP servers in the browser</a> and even <a href="https://twitter.com/_alanshaw/status/1030059189534633985">full IPFS nodes in the browser</a>. Other browsers started catching up with Firefox: <a href="https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/29sFh4tTdcs/K4XroilVBAAJ">Chromium began a discussion to permit new decentralized protocols</a> following Firefox’s example. There is now a website where you can check how well major browsers support the DWeb: <a href="https://arewedistributedyet.com">arewedistributedyet.com</a>. I have to say, <a href="https://twitter.com/andrestaltz/status/957226778166218752">in January I predicted this would happen</a>.</p>

<p>That said, the one browser that took the DWeb spotlight was undoubtedly <a href="https://beakerbrowser.com/">Beaker Browser</a>, which this year had a redesign, <a href="https://twitter.com/kriesse/status/1003267024649379840">several conference talks</a>, and a <a href="https://explore.beakerbrowser.com/">thriving community of creators</a>, all indicative of a successful project. Beaker’s success so far has been making authoring and publishing a first-class experience of the web, not as data submitted to a server, but as actual HTML sites authored from scratch. It’s a love letter to the web’s original design: “the creation of new links and new material by readers, [so that] <a href="https://web.archive.org/web/20110717031115/http://info.cern.ch/NextBrowser.html">authorship becomes universal</a>”.</p>

<p>On the social side of the web, <a href="https://scuttlebutt.nz">Secure Scuttlebutt (SSB)</a>’s community grew to over 10k accounts and <a href="https://twitter.com/andrestaltz/status/1067499207966294016">100k connections</a>, <a href="https://manyver.se">Manyverse</a> was launched as the first SSB mobile app (by yours truly), and the wider community <a href="https://opencollective.com/access">received significant funding</a> from <a href="https://handshake.org/">Handshake</a>.</p>

<p>Among other projects, significant advancements happened in 2018, such as <a href="http://oscoin.io/radicle.html">OSCoin’s release of Radicle</a> and <a href="https://medium.com/h-o-l-o/holos-org-wide-progress-in-2018-40c71361bedf">the rise of Holochain</a>. There is too much news to fit in this article, but the bottom line is that decentralized web projects are now beyond experiments: they are working hard towards maturity and beginning to develop end-user apps.</p>

<p>These projects have also appeared on the radar of tech companies, quite literally, as <a href="https://www.thoughtworks.com/radar/platforms">ThoughtWorks marked IPFS as an ‘assess’ item in its tech radar</a>. IPFS also took headlines when <a href="https://twitter.com/Cloudflare/status/1041674183946764288">Cloudflare decided to set up an IPFS gateway</a>, providing a web-accessible endpoint to content hosted throughout IPFS nodes. Another company aware of the DWeb is Samsung, which this year announced the <a href="https://samsungnext.com/whats-next/introducing-the-samsung-next-stack-zero-grant/">Samsung NEXT Stack Zero Grant</a> specifically for peer-to-peer web and decentralized projects. The DWeb also got mentioned in higher ranks, when Rep. David Cicilline, questioning Google CEO Sundar Pichai in Congress, <a href="https://youtu.be/zIniYSkAWo0?t=7680">said</a> “<em>Along with 83% of americans, I strongly support an open decentralized internet that is free of powerful gatekeepers</em>”, echoing Tim Berners-Lee’s articles on re-decentralizing the web.</p>
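<p>To make the gateway idea concrete: an IPFS gateway maps a content identifier (CID) to a plain HTTP path of the form <code class="language-plaintext highlighter-rouge">/ipfs/&lt;cid&gt;</code>, so any ordinary web client can fetch P2P-hosted content without running an IPFS node. A minimal sketch of that mapping (the CID below is a placeholder, not a real object):</p>

```python
def ipfs_gateway_url(cid: str, gateway: str = "https://cloudflare-ipfs.com") -> str:
    """Build the HTTP URL at which a gateway serves the content behind an IPFS CID."""
    return f"{gateway}/ipfs/{cid}"

# Any HTTP client (a browser, curl, requests) can then fetch this URL:
print(ipfs_gateway_url("QmPlaceholderCid"))
```

<p>The same path convention works against any public gateway host, which is what makes gateways a low-friction bridge between the DWeb and today’s web.</p>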

<h2 id="2019">2019?</h2>

<p>To the best of my ability, I can offer some predictions for 2019 or early 2020.</p>

<p>There will be a global economic recession. We are certainly in the latter stages of an optimistic period, and with political instability, US-China trade wars, and tech giant underperformance, the economy is fragile enough that any event could tip it towards pessimism. This will affect tech giants directly, because they are all publicly traded companies, and one could say we have already been seeing the beginnings of a recession since October 2018.</p>

<p>FB will lose value steadily throughout 2019. It might spike up in reaction to some good decisions, but the trend will be downwards, both because the overall economy will be difficult and because of FB’s own issues. The sentiment around Facebook.com will also continue to decay, but keep an eye on Instagram and WhatsApp. Even if all Facebook.com users join the #DeleteFacebook movement, they are much less likely to delete their WhatsApp and Instagram apps, and many don’t even know that FB owns all these products. FB knows this, and it will defend the business and user experience in those apps. Supposing Facebook.com dies (I don’t think it will), WhatsApp and Instagram can be FB’s second chance at getting it right. Maybe Zuckerberg apologized for Facebook so much because that is where all the mistakes were committed; maybe their next platforms will be better. There is talk that FB should be broken apart, one company for each of these products, but FB will do everything (including lobbying) to avoid a breakup of its social monopoly.</p>

<p>Overall, although FB constantly occupies scandalous news headlines, we are underestimating FB. No other platform has made roughly 75% of (non-China) internet users its monthly active users (<a href="https://investor.fb.com/financials/?section=secfilings">2.6 billion users</a> of FB products, divided by <a href="https://en.wikipedia.org/wiki/Global_Internet_usage">3.5 billion non-China internet users</a>), which means that if any two random persons want to be in contact over the internet, the easiest way is very likely through FB products. That is of immense value and does not die out quickly, and it is hard to compare the hypothetical sudden death of Facebook to the sudden deaths of other internet platforms, because literally no other internet platform has ever been as large as Facebook. I myself have been blocking both GOOG and FB services from my computer for 2+ years, and I recognize that it is vastly easier to stop using GOOG services than it is to stay outside FB products. I am constantly reminded that I am excluded from a lot of social activities, and I know that my choice also places an uncomfortable social burden on others. My search engine choice does not cause that same effect.</p>

<p>New York’s press war with FB has most likely made people <em>aware</em> of the ethical underperformance of FB executives, but this is not much different from discovering that your country’s politicians and leaders are corrupt. It’s enraging, it’s worth protesting, and maybe with enough mass coordination you can make a change, but it’s still a centralized authority so much more powerful than you that it leaves you feeling powerless to make a difference or even change your habits. This is just one more symptom that FB is a Net State: it has a huge population (userbase), citizen identity (login/account), a constitution (content moderation rules), a government (Zuckerberg and the FB company), and now even other states (the US government, the EU, the UK parliament, etc.) are engaging with it while some of its citizens protest against it. FB as a Net State is also actively developing its Police capabilities, largely in reaction to the election interference scandals. But this is not exclusively because of election interference; it would in any case be an inevitable next step for a Net State. See Zuckerberg’s post “<a href="https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/">A Blueprint for Content Governance and Enforcement</a>”. Let me highlight one word in that title to make it very obvious: Enforcement.</p>

<p>However, FB is also underestimating us. More than any other organization, FB understands memetics, the study of the human drive to imitate others and of the viral spread of information and behavior. I remember watching Zuckerberg testify to Congress and being asked about the #DeleteFacebook movement; he answered that it didn’t have a significant impact on the numbers. I knew at that moment that he was lying or purposefully underplaying the effect. Everything FB does, from press releases to UI design to copying competitors, is about memetic engineering. They know that no one can really stop a viral movement once it is fully unleashed, and they fear that the same mechanics that unlocked their exponential growth could cause their exponential evaporation. They fear not any single entity; they fear the power of the <em>many</em>.</p>

<p>GOOG in 2019 will get increasingly boring, maintaining its leading position in Search, YouTube, Android, and Docs. It will keep pushing strongly for AI, since that’s its core mission, but it’s unclear whether we will see significant AI breakthroughs in 2019 and 2020. Maybe something interesting, but not on the same revolutionary scale as the iPhone in 2007. Not yet. GOOG might also keep its tradition of discontinuing products; we might see one or two more discontinued GOOG products in 2019. Its cloud business is at risk of being (gradually) discontinued, unless perhaps it specializes in AI as a service.</p>

<p>Keep an eye on AMZN, AAPL, and MSFT, particularly on whether they can build strong AI competence. While GOOG and FB still take most of the top AI talent, they also receive a lot of negative press and their employees may be on the verge of quitting, whereas AI is a long-term battle. With immense budgets, AMZN, AAPL, and MSFT are actually good incubators for AI technology.</p>

<p>Cryptocurrencies will have a good 2019. Most likely not exponential growth, maybe just linear or super-linear growth. While many have considered cryptocurrencies defeated after the bubble burst in late 2017, cryptocurrencies behave financially very differently from startups, growth companies, or commodities. They are open source and permissionless databases, which means they don’t die easily. Growth companies have budget runways; commodities are subject to specific kinds of supply and demand. But open source and open data do not die, not until everyone has lost interest. Cryptocurrencies will most likely have many winters and experience the hype cycle multiple times, each plateau of productivity blending quickly into the next peak of inflated expectations.</p>

<p>Regulation of both tech giants and cryptocurrencies will tighten in 2019. Governments have barely woken up to the power these two cyberforces hold over real society, and since they take months and years to reactively regulate, 2019 will show regulation meant for the world of 2016, both from the EU and the USA. A first step might be the USA copying parts of the EU’s GDPR.</p>

<p>Regarding DWeb projects, in 2019 a few (maybe one or two) larger organizations may make experimental use of decentralized protocols such as IPFS, Holochain, SSB, or Dat. In absolute numbers these might be small advancements, but still significant enough to scale these projects up by <em>one order of magnitude</em>, which is considerable; tech giants, though, remain about four orders of magnitude larger. Funding will be a challenge for these projects, and in 2019 a few of them might lose momentum due to lack of resources. Another challenge will be the UX Gap.</p>

<p>Currently, internet users are served top-notch user experience by centralized services, but are dissatisfied with the governance underperformance and freedom problems of such services. DWeb projects are technically mature enough to solve the governance issues, but cannot yet provide an easy-to-switch user experience on par with centralized services. To attract further funding, DWeb projects also need an app with great UX to showcase the power of the technical foundation. They will need, essentially, a <em>Mastodon Effect</em>. The success of <a href="https://joinmastodon.org/">Mastodon</a> is primarily a UX success: it fits users’ expectations, looks aesthetically pleasing, and <em>just works</em>, but its protocol foundation is somewhat lacking compared to modern P2P protocols, particularly in distributing power and authority.</p>

<p>More conferences will happen on the topic of DWeb, not just the Decentralized Web Summit, which means the community around these projects will grow. In 2019 we might see one, two, or three very interesting end-user apps that fill the UX Gap and are able to start thriving communities. I am excited to be part of this future, and I hope it becomes a movement larger than any individual involved. I wish you a good start for 2019, too!</p>]]></content><author><name>André Staltz</name></author><category term="blog" /><summary type="html"><![CDATA[2018 marked history as the year when governments made tech giants responsible for election interference and tumult in democracy. Centralization and decentralization were central themes in cyberspace this year, while regulation and freedom also defined the rhetoric of many actors. We understood how closely cyberspace and meatspace affect each other, demonstrated by a couple of key events in 2018. FB and GOOG peaked (Source: Reuters) When the Cambridge Analytica data scandal was published on March 17th, FB stock took a big hit. In April, Zuckerberg appeared before U.S. Congress to testify, leaving answers that many felt were unsatisfactory. A few months later, as their Q2 earnings report demonstrated little to no growth, many investors immediately sold their stock, leading to 20% drop in price ($120 billion in value). Many seemed to realize that FB is not doing well, neither as a business, nor as a platform for humane discourse. As a result, FB stock prices this year went 2 years back in time, back to prices similar to early 2017. GOOG stock prices had a similar performance, as GOOG also received some negative press related to YouTube’s role in election interference, as well as leaks that revealed a deteriorating company cohesion, lower commitment to ethics, and protestable handling of sexual harassment incidents. Other Silicon Valley companies shared similar troubles. 
In 2017, Uber was a source of company culture scandals, but in 2018 its business showed signs of slowing. (Source: MarketWatch) The overall sentiment from the population has shifted: in a American poll of opinion, more people believe that social media hurts democracy more than it helps. The public perception of tech giants has gotten worse. In 2017, #DeleteUber was a trending hashtag. In 2018, it was #DeleteFacebook’s turn, causing more than 40% of young adults in the U.S. to delete the app from their phones. Even the co-founder of WhatsApp (a FB subsidiary) Brian Acton used the same hashtag to express his views, after quitting his position at FB. Other directors, such as Instagram’s (another FB subsidiary) co-founders, had a more graceful exit from FB. But the consistent message coming from many former executives, presidents and investors is that FB has crossed the line, becoming psychologically harmful and destroying how society works. The common themes for tech giants in 2018 were evaporation of reputation, and a decline in business. Their reputation got attacked internally and externally, from multiple angles and stories. This year, many discovered that these companies are darker than we thought. I am an outspoken critic of FB for years, but if you told me earlier this year that FB would hire Definers Public Affairs to lift negative stories on FB-critical senators, I would have dismissed that as an exagerated prediction. But this happened, and we’re all learning how dark these tech giants actually are, thanks to internal leaks and adversarial journalism. On the business side, it seems like these giants have saturated their products. FB’s Facebook growth has stalled in the USA, probably because more than 70% of Americans are regular (monthly active) Facebook users. GOOG focused on maintaining its current Search and YouTube ad revenue, while attempting to grow its AI efforts and Cloud business, where it still has not guaranteed leadership. 
GOOG is actually lagging behind when it comes to the Cloud, and its AI products are promising but are not yet reliable revenue streams. In 2018, GOOG also continued to discontinue many of its non-key products, such as Inbox, Google+ for consumers, and Fusion Tables, which harms its credibility as a reliable provider of services, important as a Cloud business. Less social networks (Unknown source, please contact me if you know who the author is) The sunsetting of Google+ for consumers is an important marker for the web in 2018, because it consolidates FB’s dominance in social networks. It’s not the first time that GOOG discontinues a large social network, Orkut was once a social network with dozens of millions of active users. Ironically, one of the reasons GOOG discontinued Orkut was the prospect of Google+ and its potential to replace Orkut. The problem with discontinuing small platforms (yet multi-million user large!) is that it removes consumer choice when the tide changes. Orkut was very popular among Brazilians, but began losing space to Facebook in 2009. However, now that Facebook’s credibility is decreasing, users have no choice of going back to a previous social network. This lack of platform competition is due to such platforms being proprietary. It is easy for one company to acquire and assimilate another social network. Companies do that because by joining platforms together they acquire more power and efficiency. It is also easier for companies to discontinue a platform, and they do that when the costs of running the platform don’t justify the small gains. Therefore, among proprietary for-profit social platforms, only the large and merged platforms tend to survive. Hence, Facebook. However, had the platform been a non-commercial open protocol (such as the Web or Email), its availability would be much more reliable and independent from any company’s seasonal performance, likely surviving for many decades. 
Another 2018 story on social networks was the content crackdown that Tumblr imposed on adult content, including artistic communities, that forced many users away from its platform. For many users it spelled the end of that social network. As a proprietary platform, Tumblr was first acquired by Yahoo, which in turn was acquired by Verizon in 2017. This means Tumblr is subject to the same instability and uncertainty that is inherent of a platform hanging on the decisions of a few business executives legally entitled to steer the platform. Regulation In the European Union, 2018 was the year when GDPR was switched on, requiring deep changes to a huge proportion of sites and internet services run by organizations around the world. For many sites targeting national (e.g. American) audiences, such as Chicago Tribune, GDPR compliance was not worth it, so these sites became unavailable to European readers. While GDPR may have sparsely helped in its original goal to increase user control over personal data, it had a role in furthering the balkanization of the internet, already norm in China. (Source: Chicago Tribune) The EU is aiming a lot of new regulation at tech giants and the internet at large. GDPR was not the only one, as Europeans are already familiar with the infamous cookie banner for years. While most of regulation is intended to limit corporate exploitation and protect user freedom, some new proposals, such as the Link Tax and Upload Filters may significantly harm openness and freedom on the internet. In September 2018, unfortunately, the EU Parliament decided to proceed with the proposal. Like GDPR triggered suddenly for many unprepared organizations, so may new legislation continue to add obstacles to internet traffic and rich information exchange globally. Many of these organizations may not find it cost-beneficial to serve European users anymore. NY-FB cyberwar In 2018 the press and FB became enemies. 
Last year I blogged about how the Web began dying in 2014 and it has to do with traffic sources to news sites. In the recent years, the online press became more dependent on FB and GOOG for the vast majority of their traffic, putting professional journalism at the mercy of these giant platforms. Moreover, large news sites often were in competition with lower-tier sensationalist fabricated articles spreading quickly on platforms like Facebook. Naturally, the press got upset. The New York Times this year had a storm of articles published specifically on FB scandals, too many for me to quote ([1] [2] [3] etc). Similar in tone, the New Yorker also had incisive stories to report. In Menlo Park, CA, Zuckerberg started the year by pledging to “fix Facebook”. Months later, after the turbulence caused by the Guardian’s and NYT’s articles about Cambridge Analytica, Zuckerberg adopted a war attitude internally at FB. The tense exchange between FB and NY press is logical, they are competing in the same market: attention and advertisement. And while FB denies that it is a publisher, what matters is that both FB and the press monetize eyeballs, and their competition for attention is a zero-sum game. AMZN / MSFT / AAPL soared Meanwhile, the other tech giants had an easier year, not occupying the negative spotlights in the press, not having to answer frustrated senators in Congress, and increasing their market capitalization. While FB and GOOG finished the year with -29% and -7% (respectively) stock value compared to the beginning of the year, AMZN and MSFT stock prices went up 17% and 14% (respectively). Jeff Bezos became the richest person on Earth in July. AAPL became the world’s first trillion dollar company, marking history. Soon after, though, AAPL’s value declined and MSFT was able to pass it and acquire the title of most valuable U.S. company. 
Because these tech giants are not competing in advertisement and are not surveillance-capitalism companies, they had a much easier year in terms of publicity. While none of these companies are paragons of user freedom and privacy, people tend to place the blame mostly on FB and GOOG for the psychological damage of social media addiction, misinformation, and political social engineering.

Blockchain winter

Cryptocurrencies peaked in value a few weeks before 2018 began, and after 12 months all of the top cryptocurrencies saw a 70%+ drop in value. You could easily say the Bitcoin bubble burst, as many were already expecting in 2017. ICOs in 2018 raised 3 times more money than in 2017, but 2018 was not a great year for ICOs. Many people noticed a large amount of scam ICOs, and in the U.S.A. the government began the process of regulating ICOs, often classifying them as securities. The number of ICOs has been steadily declining since mid 2018.

Overall, 2018 was a difficult year for cryptocurrencies, but there are plenty of respectable active projects to compensate for the scams, and even though cryptocurrencies entered mainstream and casual discourse, we are still talking about an industry in its infancy. Cryptocurrencies will become the center of attention again when the global stock market enters a recession and people look for alternative stores of value. At the very end of 2018, we may already be seeing this occurring, as stocks in America entered a bear market around December 21st, and simultaneously major cryptocurrencies like Bitcoin and Ethereum rose noticeably.

Peer-to-peer grassroots

When the topic is decentralized technologies, cryptocurrencies usually take the spotlight, but one ramification of decentralization is the non-blockchain peer-to-peer (P2P) projects, also known as the “Decentralized Web” (DWeb), such as IPFS, Dat, SSB, ZeroNet, Holochain, Solid, WebTorrent, Matrix, SAFE, GUN, Althea, etc.
These have had a great year, although they are small in scale. One could say these P2P projects were behaving in 2018 like cryptocurrencies were in 2015-2016: not occupying mainstream discourse, but still promising, in active use, diverse, and thriving. The highlight of this movement was the Decentralized Web Summit that took place in San Francisco in August. Young pioneers and industry veterans were equally excited about the tangible innovation and opportunities ahead. Vint Cerf, co-inventor of the TCP/IP protocol, called it a historical summit. Tim Berners-Lee, creator of the WWW, was also present.

On January 5th this year, as fellow developer André Garzia was working on a Firefox extension for SSB called Patchfox, he wanted to add support for the ssb:// protocol in addresses, and sent a commit to Firefox to permit a few more decentralized protocols. I call this the DWeb big bang commit: it was a small effort, but it brought these protocols to public attention, as the commit ended up in the Firefox Update changelog, also leading to articles reporting on the change. Months later, the Mozilla Hacks blog featured a series of articles on DWeb protocols, Mozilla gave an open source grant to the Dat project, and began the libdweb experiment as a set of new browser capabilities useful for DWeb protocols, enabling TCP servers in the browser and even full IPFS nodes in the browser. Other browsers started catching up with Firefox: Chromium began discussing permitting new decentralized protocols, following Firefox’s example. There is now a website where you can check how well major browsers support the DWeb: arewedistributedyet.com. I have to say, in January I predicted this would happen.

That said, the one browser that took the DWeb spotlight was undoubtedly Beaker Browser, which this year had a redesign, several conference talks, and a thriving community of creators, all indicative of a successful project.
Beaker’s success so far has been making authoring and publishing a first-class experience of the web, not as data submitted to a server, but as actual HTML sites authored from scratch. It’s a love letter to the web’s original design: “the creation of new links and new material by readers, [so that] authorship becomes universal”.

On the social side of the web, Secure Scuttlebutt (SSB)’s community grew to over 10k accounts and 100k connections, Manyverse was launched as the first SSB mobile app (by yours truly), and the wider community received significant funding from Handshake. In other projects, significant advancements happened in 2018, like OSCoin’s release of Radicle and the rise of Holochain. There is too much news to fit in this article, but the bottom line is that decentralized web projects are now beyond experiments: they are working hard towards maturity and beginning to develop end-user apps.

These projects have also appeared on the radar of tech companies, also literally, as ThoughtWorks marked IPFS as an ‘assess’ item in its tech radar. IPFS also took headlines when Cloudflare decided to set up an IPFS gateway, providing a web-accessible endpoint to content hosted throughout IPFS nodes. Another company aware of the DWeb is Samsung, which this year announced the Samsung NEXT Stack Zero Grant specifically for the peer-to-peer web and decentralized projects. The DWeb also got mentioned in higher ranks, when Rep. David Cicilline, questioning Google CEO Sundar Pichai in Congress, said “Along with 83% of Americans, I strongly support an open decentralized internet that is free of powerful gatekeepers”, echoing Tim Berners-Lee’s articles on re-decentralizing the web.

2019?

To the best of my estimates, I can give some predictions for 2019 or early 2020. There will be a global economic recession.
We are certainly in the latter stages of an optimistic period, and with political instability, US-China trade wars, and tech giant underperformance, the economy is fragile to any event that tips it towards pessimism. This will affect the tech giants directly, because they are all publicly traded companies, and one could say we have already been seeing the beginnings of a recession since October 2018.

FB will devalue steadily throughout 2019. It might spike up in reaction to some good decisions, but the tendency will be downwards, because the overall economy will be difficult, and because of FB’s own issues. The sentiment around Facebook.com will also continue to decay, but keep an eye on Instagram and WhatsApp. Even if all Facebook.com users join the #DeleteFacebook movement, they are much less likely to delete their WhatsApp and Instagram apps, and many don’t even know that FB owns all these products. FB knows this, and it will defend the business and user experience in those apps. Supposing Facebook.com dies (I don’t think it will), WhatsApp and Instagram can be FB’s second chance at getting it right. Maybe Zuckerberg apologized for Facebook so much because that is where all the mistakes were committed; maybe their next platforms will be better. There are talks that FB should be broken apart, one company for each of these products, but FB will do everything (also lobbying) to avoid a breakup of its social monopoly.

Overall, although FB constantly occupies scandalous news headlines, we are underestimating FB. No other platform has made 90%+ of (non-China) internet users its monthly active users (2.6 billion users of FB products, divided by 3.5 billion non-China internet users), which means that if any two random persons want to be in contact over the internet, the easiest way is very likely through FB products.
That is of immense value and does not die out quickly, and it is hard to compare the hypothetical sudden death of Facebook to the sudden deaths of other internet platforms, because literally no other internet platform has yet been as large as Facebook. I myself have been blocking both GOOG and FB services from my computer for 2+ years, and I recognize that it is vastly easier to stop using GOOG services than it is to stay outside FB products. I am constantly reminded that I am excluded from a lot of social activities, and I know that my choice also places an uncomfortable social burden on others. My search engine choice does not cause that same effect.

New York’s press war with FB has most likely made people aware of the ethical underperformance of FB executives, but this is not much different from discovering that your country’s politicians and leaders are corrupt. It’s enraging, it’s protestable, and maybe if you try hard enough with enough mass coordination you can make a change, but it’s still a centralized authority so much more powerful than you are that it leaves you feeling powerless to make a difference or even change your habits. This is just one more symptom that FB is a Net State: it has a huge population (userbase), citizen identity (login/account), a constitution (content moderation rules), a government (Zuckerberg and the FB company), and now even other states (the US government, the EU, the UK parliament, etc.) are engaging with it while some of its citizens protest against it. FB as a Net State is also actively developing its Police capabilities, largely in reaction to the election interference scandals. However, this is not exclusively because of election interference; it would in any case be an inevitable next step for a Net State. See Zuckerberg’s post “A Blueprint for Content Governance and Enforcement”. Let me highlight one word in that title to make it very obvious: Enforcement.

However, FB is also underestimating us.
More than any other organization, FB understands memetics, the field of study of the human drive to imitate others and of the viral spread of information and behavior. I remember watching Zuckerberg testify to Congress and being asked about the #DeleteFacebook movement; he answered that it didn’t have a significant impact on the numbers. I knew at that moment that he was lying or purposefully underplaying its effect. Everything FB does, from press releases to designing UIs to copying competitors, is about memetic engineering. They know that no one can really stop a viral movement once it is fully unleashed, and they fear that the same mechanics that unlocked their exponential growth could cause their exponential evaporation. They fear not any single entity; they fear the power of many.

GOOG in 2019 will get increasingly more boring, maintaining its leader position in Search, YouTube, Android, and Docs. It will keep pushing strongly for AI, since that is its core mission, but it is unclear whether we will see significant AI breakthroughs in 2019 and 2020. Maybe something interesting, but not on the same revolutionary scale as the iPhone in 2007. Not yet. GOOG might also keep its tradition of discontinuing products: we might see one or two more discontinued GOOG products in 2019. Its cloud business is at risk of being (gradually) discontinued, unless perhaps it specializes in AI as a service.

Keep an eye on AMZN, AAPL, and MSFT, particularly on whether they can build strong AI competence. While GOOG and FB still take most of the top AI talent, they also receive a lot of negative press and their employees may be on the verge of quitting, while AI is a long-term battle. With immense budgets, AMZN, AAPL, and MSFT are actually good incubators for AI technology.

Cryptocurrencies will have a good 2019. Most likely not an exponential kind of growth, maybe just linear or super-linear growth.
While many have considered cryptocurrencies defeated after the bubble burst in late 2017, cryptocurrencies behave financially very differently from startups, growth companies, or commodities. These are open source and permissionless databases, which means they don’t die easily. Growth companies have budget runways; commodities are subject to specific kinds of supply and demand. But open source and open data do not die, not until everyone has lost interest. Cryptocurrencies most likely have many winters and experience the hype cycle multiple times, each plateau of productivity blending quickly into the next peak of inflated expectations.

Regulation of both tech giants and cryptocurrencies will tighten in 2019. Governments have barely woken up to the power these two cyberforces have on real society, and since they take months and years to reactively regulate, 2019 will show regulation meant for the world of 2016, both from the EU and the USA. A first step might be the USA copying parts of the EU’s GDPR.

Regarding DWeb projects, in 2019 a few (maybe one or two) larger organizations may make experimental use of decentralized protocols such as IPFS, Holochain, SSB, or Dat. These might be small advancements in absolute numbers, but still significant enough to scale these projects up by one order of magnitude, which is considerably bigger, though tech giants would still be 4 orders of magnitude bigger. Funding will be a challenge for these projects, and in 2019 a few of them might lose momentum due to lack of resources. Another challenge will be the UX Gap. Currently, internet users are served top-notch user experience by centralized services, but are dissatisfied with the governance underperformance and freedom problems of such services. DWeb projects are technically mature enough to solve the governance issues, but cannot yet provide an easy-to-switch user experience on par with centralized services.
For further funding, DWeb projects also require an app with great UX to showcase the power of the technical foundation. They will need, essentially, a Mastodon Effect. The success of Mastodon is primarily a UX success: it fits users’ expectations, looks aesthetically pleasing, and just works, but its protocol foundation is somewhat lacking compared to modern P2P protocols, particularly in distributing power and authority. More conferences will happen on the topic of the DWeb, not just the Decentralized Web Summit, which means the community around these projects will grow. In 2019 we might see one, two, or three very interesting end-user apps that fill the UX Gap and are able to start thriving communities. I am excited to be part of this future, and I hope it becomes a movement larger than any individual involved. I wish you a good start to 2019, too!]]></summary></entry><entry><title type="html">JavaScript Getter-Setter Pyramid</title><link href="https://staltz.com/javascript-getter-setter-pyramid.html" rel="alternate" type="text/html" title="JavaScript Getter-Setter Pyramid" /><published>2018-12-18T00:00:00+02:00</published><updated>2018-12-18T00:00:00+02:00</updated><id>https://staltz.com/javascript-getter-setter-pyramid</id><content type="html" xml:base="https://staltz.com/javascript-getter-setter-pyramid.html"><![CDATA[<p>The cornerstone of JavaScript is the function. It is a flexible abstraction that serves as the basis for other abstractions, such as Promises, Iterables, Observables, and others. I have been teaching these concepts in conferences and workshops, and over time I have found an elegant summary of these abstractions, laid out as a pyramid. In this blog post I’ll provide a tour through the layers of this pyramid.</p>

<h2 id="functions" class="hr"><span class="hr">FUNCTIONS</span></h2>

<h3 style="text-align:center"><code>X =&gt; Y</code></h3>

<svg xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" width="100%" height="200" viewBox="0 0 158.74999 50" version="1.1">
  <g transform="translate(0,-250.58345)">
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 15.393816,254.60231 H 143.99905 l 4.90616,15.59154 H 10.491946 Z" id="function-bg" />
    <text xml:space="preserve" x="67.759735" y="264.60794" style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Function</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 10.104156,271.46572 H 149.2976 l 4.90616,15.59154 H 5.2022871 Z" id="values-bg" />
    <text xml:space="preserve" x="72" y="281.44595" style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26;opacity:0.6"><tspan>Value</tspan></text>
  </g>
</svg>

<p>The very base of JavaScript is its first-class values, such as numbers, strings, objects, and booleans. Although you could write a program that uses just values and control flow, very soon you would need to write a function to improve it.</p>

<p>Functions are unavoidable abstractions in JavaScript; they are often required, for instance for async I/O via callbacks. The word “function” in JavaScript does not refer to “pure functions” as in functional programming. It’s better to understand them simply as “procedures”: lazy, reusable chunks of code with optional input (the arguments) and optional output (the return value).</p>

<p>Compared to hard-coded chunks of code, functions provide a couple of important benefits:</p>

<ul>
  <li>Laziness / reusability
    <ul>
      <li>The code inside a function must be lazy (i.e. not executed unless called) for it to be reusable</li>
    </ul>
  </li>
  <li>Implementation flexibility
    <ul>
      <li>Consumers of the function don’t care how it is implemented internally, which leaves the flexibility to implement it in various ways</li>
    </ul>
  </li>
</ul>
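<p>Both benefits can be seen in a minimal sketch (the <code>area</code> function below is a made-up illustration, not part of any runtime):</p>

```js
// Hard-coded chunk: executes immediately, and only once
const areaOfTen = 10 * 10;

// Function: lazy (nothing runs until it is called) and reusable for any input
function area(side) {
  return side * side;
}

console.log(area(10)); // 100
console.log(area(7)); // 49
```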

<h2 id="getters" class="hr"><span class="hr">GETTERS</span></h2>

<h3 style="text-align:center"><code>() =&gt; X</code></h3>
<h4 style="text-align:center">A getter is a function with no input arguments and X as output</h4>

<svg xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" width="100%" height="280" viewBox="0 0 158.74999 70" version="1.1">
  <g transform="translate(0,-225)">
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 15.393816,254.60231 H 143.99905 l 4.90616,15.59154 H 10.491946 Z" id="function-bg" />
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 20.703206,237.7389 h 46.26576 l -10e-6,15.59154 h -51.16762 z" id="getter-bg" />
    <text xml:space="preserve" x="34.422062" y="247.7571" style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Getter</tspan></text>
    <text xml:space="preserve" x="67.759735" y="264.60794" style="font-style:normal;opacity:0.6;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Function</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 10.104156,271.46572 H 149.2976 l 4.90616,15.59154 H 5.2022871 Z" id="values-bg" />
    <text xml:space="preserve" x="72" y="281.44595" style="font-style:normal;opacity:0.6;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Value</tspan></text>
  </g>
</svg>

<p>Getters are one kind of function, where no arguments are passed but a return value is expected. There are many such getters in the JavaScript runtime, such as <code class="language-plaintext highlighter-rouge">Math.random()</code>, <code class="language-plaintext highlighter-rouge">Date.now()</code>, and others. Getters are also useful as abstractions for values. Compare <code class="language-plaintext highlighter-rouge">user</code> with <code class="language-plaintext highlighter-rouge">getUser</code> below:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">const</span> <span class="nx">user</span> <span class="o">=</span> <span class="p">{</span><span class="na">name</span><span class="p">:</span> <span class="dl">'</span><span class="s1">Alice</span><span class="dl">'</span><span class="p">,</span> <span class="na">age</span><span class="p">:</span> <span class="mi">30</span><span class="p">};</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">user</span><span class="p">.</span><span class="nx">name</span><span class="p">);</span> <span class="c1">// Alice</span>


<span class="kd">function</span> <span class="nx">getUser</span><span class="p">()</span> <span class="p">{</span>
  <span class="k">return</span> <span class="p">{</span><span class="na">name</span><span class="p">:</span> <span class="dl">'</span><span class="s1">Alice</span><span class="dl">'</span><span class="p">,</span> <span class="na">age</span><span class="p">:</span> <span class="mi">30</span><span class="p">};</span>
<span class="p">}</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getUser</span><span class="p">().</span><span class="nx">name</span><span class="p">);</span> <span class="c1">// Alice</span>
</code></pre></div></div>

<p>By using a getter to represent a value, we inherit the benefits of functions, such as laziness: if we don’t call <code class="language-plaintext highlighter-rouge">getUser()</code>, then the user object will not be created in vain.</p>

<p>We also gain implementation flexibility, because we can calculate the return object in multiple different ways, either by creating a plain object, or by returning an instance of a class, or by using properties on the prototype, etc. With hard-coded values we wouldn’t have this flexibility.</p>
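<p>To illustrate that flexibility, here are two hypothetical implementations (names invented for this sketch) that consumers cannot tell apart:</p>

```js
// Implementation A: returns a plain object literal
function getUserA() {
  return { name: 'Alice', age: 30 };
}

// Implementation B: returns a class instance, with properties
// initialized in the constructor
class User {
  constructor(name, age) {
    this.name = name;
    this.age = age;
  }
}
function getUserB() {
  return new User('Alice', 30);
}

// Both getters are consumed identically:
console.log(getUserA().name); // Alice
console.log(getUserB().name); // Alice
```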

<p>Getters also allow us to have a hook for side effects. Whenever the getter is executed we can trigger a useful side effect, like a <code class="language-plaintext highlighter-rouge">console.log</code> or the triggering of an Analytics event, for instance:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">function</span> <span class="nx">getUser</span><span class="p">()</span> <span class="p">{</span>
  <span class="nx">Analytics</span><span class="p">.</span><span class="nx">sendEvent</span><span class="p">(</span><span class="dl">'</span><span class="s1">User object is now being accessed</span><span class="dl">'</span><span class="p">);</span>
  <span class="k">return</span> <span class="p">{</span><span class="na">name</span><span class="p">:</span> <span class="dl">'</span><span class="s1">Alice</span><span class="dl">'</span><span class="p">,</span> <span class="na">age</span><span class="p">:</span> <span class="mi">30</span><span class="p">};</span>
<span class="p">}</span>
</code></pre></div></div>

<p>Computations on getters can also be abstract, because functions can be passed around as first-class values in JavaScript. For instance, consider this addition function, which takes getters as arguments and returns a getter of a number, not a number directly:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">function</span> <span class="nx">add</span><span class="p">(</span><span class="nx">getX</span><span class="p">,</span> <span class="nx">getY</span><span class="p">)</span> <span class="p">{</span>
  <span class="k">return</span> <span class="kd">function</span> <span class="nx">getZ</span><span class="p">()</span> <span class="p">{</span>
    <span class="kd">const</span> <span class="nx">x</span> <span class="o">=</span> <span class="nx">getX</span><span class="p">();</span>
    <span class="kd">const</span> <span class="nx">y</span> <span class="o">=</span> <span class="nx">getY</span><span class="p">();</span>
    <span class="k">return</span> <span class="nx">x</span> <span class="o">+</span> <span class="nx">y</span><span class="p">;</span>
  <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<p>The benefit of such abstract computation is clearer when the getters return unpredictable values, such as adding with the getter <code class="language-plaintext highlighter-rouge">Math.random</code>:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">const</span> <span class="nx">getTen</span> <span class="o">=</span> <span class="p">()</span> <span class="o">=&gt;</span> <span class="mi">10</span><span class="p">;</span>
<span class="kd">const</span> <span class="nx">getTenPlusRandom</span> <span class="o">=</span> <span class="nx">add</span><span class="p">(</span><span class="nx">getTen</span><span class="p">,</span> <span class="nb">Math</span><span class="p">.</span><span class="nx">random</span><span class="p">);</span>

<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getTenPlusRandom</span><span class="p">());</span> <span class="c1">// 10.948117215055046</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getTenPlusRandom</span><span class="p">());</span> <span class="c1">// 10.796721274448556</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getTenPlusRandom</span><span class="p">());</span> <span class="c1">// 10.15350303918338</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getTenPlusRandom</span><span class="p">());</span> <span class="c1">// 10.829703269933633</span>
</code></pre></div></div>

<p>It’s also common to see getters used with Promises: a Promise is not a reusable computation, since it runs once and caches its result, so wrapping a Promise constructor in a getter (also known as a “factory” or “thunk”) makes the computation reusable.</p>
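<p>A minimal sketch of that pattern (the <code>getTimestamp</code> name is invented for this example):</p>

```js
// A Promise starts its work as soon as it is constructed, and settles once;
// calling .then() again replays the cached result instead of redoing the work.
const promiseOnce = new Promise(resolve => resolve(Date.now()));

// Wrapping the constructor in a getter (a "factory" or "thunk") makes the
// computation reusable: every call starts a brand new Promise.
function getTimestamp() {
  return new Promise(resolve => resolve(Date.now()));
}

getTimestamp().then(t => console.log(t));
getTimestamp().then(t => console.log(t)); // an independent, fresh run
```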

<h2 id="setters" class="hr"><span class="hr">SETTERS</span></h2>

<h3 style="text-align:center"><code>X =&gt; ()</code></h3>
<h4 style="text-align:center">A setter is a function with X as input and no output</h4>

<svg xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" width="100%" height="280" viewBox="0 0 158.74999 70" version="1.1">
  <g transform="translate(0,-225)">
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 15.393816,254.60231 H 143.99905 l 4.90616,15.59154 H 10.491946 Z" id="function-bg" />
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 20.703206,237.7389 h 46.26576 l -10e-6,15.59154 h -51.16762 z" id="getter-bg" />
    <text xml:space="preserve" x="34.422062" y="247.7571" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Getter</tspan></text>
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.118406,237.73893 h 70.603814 l 4.90616,15.59154 H 68.117526 Z" id="setter-bg" />
    <text xml:space="preserve" x="96.157547" y="247.75713" style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Setter</tspan></text>
    <text xml:space="preserve" x="67.759735" y="264.60794" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Function</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 10.104156,271.46572 H 149.2976 l 4.90616,15.59154 H 5.2022871 Z" id="values-bg" />
    <text xml:space="preserve" x="72" y="281.44595" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Value</tspan></text>
  </g>
</svg>

<p>Setters are one kind of function, where an argument is provided, but no output value is returned. There are many setters natively in the JavaScript runtime and in the DOM, such as <code class="language-plaintext highlighter-rouge">console.log(x)</code>, <code class="language-plaintext highlighter-rouge">document.write(x)</code>, and others.</p>

<p>Unlike getters, setters are usually not abstractions: when no value comes out of the function, the function is only meant for sending data or commanding the JavaScript runtime. For instance, while the getter <code class="language-plaintext highlighter-rouge">getTen</code> is an abstraction for the number ten and we can pass that getter around as a value, it does not make sense to pass a function <code class="language-plaintext highlighter-rouge">setTen</code> around as a stand-in for a value, because you will not be able to <em>retrieve</em> any number by calling it.</p>
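<p>A tiny sketch of that contrast (the <code>setNumber</code> name and its destination variable are invented here):</p>

```js
let current; // the destination the setter writes to

// A setter: X => (), data flows in, nothing flows out
function setNumber(x) {
  current = x;
}

const out = setNumber(10);
console.log(out); // undefined: nothing can be retrieved by calling a setter
console.log(current); // 10: the data went somewhere else instead
```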

<p>That said, setters can be simple wrappers of other setters. Consider this wrapper for the <code class="language-plaintext highlighter-rouge">console.log</code> setter:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">function</span> <span class="nx">fancyConsoleLog</span><span class="p">(</span><span class="nx">str</span><span class="p">)</span> <span class="p">{</span>
  <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">⭐ </span><span class="dl">'</span> <span class="o">+</span> <span class="nx">str</span> <span class="o">+</span> <span class="dl">'</span><span class="s1"> ⭐</span><span class="dl">'</span><span class="p">);</span>
<span class="p">}</span>
</code></pre></div></div>

<h2 id="getter-getters" class="hr"><span class="hr">GETTER GETTERS</span></h2>

<h3 style="text-align:center"><code>() =&gt; (() =&gt; X)</code></h3>
<h4 style="text-align:center">A getter-getter is a function with no input arguments and a getter as output</h4>

<svg xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" width="100%" height="310" viewBox="0 0 158.74999 60" version="1.1">
  <g transform="translate(0,-225)">
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 15.393816,254.60231 H 143.99905 l 4.90616,15.59154 H 10.491946 Z" id="function-bg" />
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 26.011436,220.87593 h 40.95695 l -10e-6,15.59154 h -45.85881 z" id="gettergetter-bg" />
    <text xml:space="preserve" x="28" y="230.8" style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Getter-getter</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 20.703206,237.7389 h 46.26576 l -10e-6,15.59154 h -51.16762 z" id="getter-bg" />
    <text xml:space="preserve" x="34.422062" y="247.7571" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Getter</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.118406,237.73893 h 70.603814 l 4.90616,15.59154 H 68.117526 Z" id="setter-bg" />
    <text xml:space="preserve" x="96.157547" y="247.75713" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Setter</tspan></text>
    <text xml:space="preserve" x="67.759735" y="264.60794" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Function</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 10.104156,271.46572 H 149.2976 l 4.90616,15.59154 H 5.2022871 Z" id="values-bg" />
    <text xml:space="preserve" x="72" y="281.44595" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Value</tspan></text>
  </g>
</svg>

<p>A special type of getter is one that returns another getter, so it’s a “getter of getters”. The need for getter-getters arises from using getters to iterate over sequences. For instance, if we want to show the sequence of numbers that are a power of two, we could use the getter <code class="language-plaintext highlighter-rouge">getNextPowerOfTwo()</code>:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">let</span> <span class="nx">i</span> <span class="o">=</span> <span class="mi">2</span><span class="p">;</span>
<span class="kd">function</span> <span class="nx">getNextPowerOfTwo</span><span class="p">()</span> <span class="p">{</span>
  <span class="kd">const</span> <span class="nx">next</span> <span class="o">=</span> <span class="nx">i</span><span class="p">;</span>
  <span class="nx">i</span> <span class="o">=</span> <span class="nx">i</span> <span class="o">*</span> <span class="mi">2</span><span class="p">;</span>
  <span class="k">return</span> <span class="nx">next</span><span class="p">;</span>
<span class="p">}</span>

<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getNextPowerOfTwo</span><span class="p">());</span> <span class="c1">// 2</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getNextPowerOfTwo</span><span class="p">());</span> <span class="c1">// 4</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getNextPowerOfTwo</span><span class="p">());</span> <span class="c1">// 8</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getNextPowerOfTwo</span><span class="p">());</span> <span class="c1">// 16</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getNextPowerOfTwo</span><span class="p">());</span> <span class="c1">// 32</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getNextPowerOfTwo</span><span class="p">());</span> <span class="c1">// 64</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getNextPowerOfTwo</span><span class="p">());</span> <span class="c1">// 128</span>
</code></pre></div></div>

<p>The problem with the code above is that the variable <code class="language-plaintext highlighter-rouge">i</code> is declared globally: if we wanted to restart the sequence, we would have to manipulate that variable directly, leaking implementation details of the getter.</p>

<p>To make the code above reusable and free of globals, we need to wrap the getter in another function. This wrapper function is also a getter.</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">function</span> <span class="nx">getGetNext</span><span class="p">()</span> <span class="p">{</span>
  <span class="kd">let</span> <span class="nx">i</span> <span class="o">=</span> <span class="mi">2</span><span class="p">;</span>
  <span class="k">return</span> <span class="kd">function</span> <span class="nx">getNext</span><span class="p">()</span> <span class="p">{</span>
    <span class="kd">const</span> <span class="nx">next</span> <span class="o">=</span> <span class="nx">i</span><span class="p">;</span>
    <span class="nx">i</span> <span class="o">=</span> <span class="nx">i</span> <span class="o">*</span> <span class="mi">2</span><span class="p">;</span>
    <span class="k">return</span> <span class="nx">next</span><span class="p">;</span>
  <span class="p">}</span>
<span class="p">}</span>

<span class="kd">let</span> <span class="nx">getNext</span> <span class="o">=</span> <span class="nx">getGetNext</span><span class="p">();</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getNext</span><span class="p">());</span> <span class="c1">// 2</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getNext</span><span class="p">());</span> <span class="c1">// 4</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getNext</span><span class="p">());</span> <span class="c1">// 8</span>
<span class="nx">getNext</span> <span class="o">=</span> <span class="nx">getGetNext</span><span class="p">();</span> <span class="c1">// 🔷 restart!</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getNext</span><span class="p">());</span> <span class="c1">// 2</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getNext</span><span class="p">());</span> <span class="c1">// 4</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getNext</span><span class="p">());</span> <span class="c1">// 8</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getNext</span><span class="p">());</span> <span class="c1">// 16</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getNext</span><span class="p">());</span> <span class="c1">// 32</span>
</code></pre></div></div>

<p>Because getter-getters are just a special type of getter, they inherit all the benefits of getters, such as: (1) implementation flexibility, (2) hook for side effects, (3) laziness. The laziness this time is reflected in the initialization step. The outer function enables lazy initialization, while the inner function enables lazy iteration of values:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">function</span> <span class="nx">getGetNext</span><span class="p">()</span> <span class="p">{</span>
  <span class="c1">// 🔷 LAZY INITIALIZATION</span>
  <span class="kd">let</span> <span class="nx">i</span> <span class="o">=</span> <span class="mi">2</span><span class="p">;</span>

  <span class="k">return</span> <span class="kd">function</span> <span class="nx">getNext</span><span class="p">()</span> <span class="p">{</span>
    <span class="c1">// 🔷 LAZY ITERATION</span>
    <span class="kd">const</span> <span class="nx">next</span> <span class="o">=</span> <span class="nx">i</span><span class="p">;</span>
    <span class="nx">i</span> <span class="o">=</span> <span class="nx">i</span> <span class="o">*</span> <span class="mi">2</span><span class="p">;</span>
    <span class="k">return</span> <span class="nx">next</span><span class="p">;</span>
  <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
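<p>The “hook for side effects” benefit shows up at both layers: the outer function is a natural place to set up state or resources once, and the inner function can react to every pull. A sketch with hypothetical logging (the names and data here are illustrative, not from the original):</p>

```js
function getGetLine() {
  console.log('initializing'); // side effect, runs once per sequence
  const lines = ['a', 'b', 'c']; // stands in for a real data source
  let i = 0;
  return function getLine() {
    console.log('pulling line ' + i); // side effect, runs on every get
    return lines[i++];
  };
}

const getLine = getGetLine(); // logs: initializing
getLine(); // logs: pulling line 0, returns 'a'
getLine(); // logs: pulling line 1, returns 'b'
```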

<h2 id="setter-setters" class="hr"><span class="hr">SETTER SETTERS</span></h2>

<h3 style="text-align:center"><code>(X =&gt; ()) =&gt; ()</code></h3>
<h4 style="text-align:center">A setter-setter is a function with a setter as input and no output</h4>

<svg xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" width="100%" height="310" viewBox="0 0 158.74999 60" version="1.1">
  <g transform="translate(0,-225)">
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 15.393816,254.60231 H 143.99905 l 4.90616,15.59154 H 10.491946 Z" id="function-bg" />
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 26.011436,220.87593 h 40.95695 l -10e-6,15.59154 h -45.85881 z" id="gettergetter-bg" />
    <text xml:space="preserve" x="28" y="230.8" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Getter-getter</tspan></text>
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.131956,220.87593 h 65.297924 l 4.90616,15.59154 H 68.131086 Z" id="settersetter-bg" />
    <text xml:space="preserve" x="85.400581" y="230.89413" style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Setter-setter</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 20.703206,237.7389 h 46.26576 l -10e-6,15.59154 h -51.16762 z" id="getter-bg" />
    <text xml:space="preserve" x="34.422062" y="247.7571" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Getter</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.118406,237.73893 h 70.603814 l 4.90616,15.59154 H 68.117526 Z" id="setter-bg" />
    <text xml:space="preserve" x="96.157547" y="247.75713" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Setter</tspan></text>
    <text xml:space="preserve" x="67.759735" y="264.60794" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Function</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 10.104156,271.46572 H 149.2976 l 4.90616,15.59154 H 5.2022871 Z" id="values-bg" />
    <text xml:space="preserve" x="72" y="281.44595" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Value</tspan></text>
  </g>
</svg>

<p>A setter-setter is a particular kind of setter, where the argument passed to it is also a setter.
While basic setters are not abstractions, setter-setters are abstractions capable of representing values that can be passed around the codebase.</p>

<p>For instance, consider how it’s possible to represent the number ten through this setter-setter:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">function</span> <span class="nx">setSetTen</span><span class="p">(</span><span class="nx">setTen</span><span class="p">)</span> <span class="p">{</span>
  <span class="nx">setTen</span><span class="p">(</span><span class="mi">10</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div></div>

<p>Notice the lack of a return statement: setters never return a value. The example above might be more readable if we simply rename some arguments:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">function</span> <span class="nx">setTenListener</span><span class="p">(</span><span class="nx">cb</span><span class="p">)</span> <span class="p">{</span>
  <span class="nx">cb</span><span class="p">(</span><span class="mi">10</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div></div>

<p>As the name indicates, <code class="language-plaintext highlighter-rouge">cb</code> stands for “callback”, and illustrates how common setter-setters are in JavaScript, given the abundance of use cases for callbacks. The abstract value represented by a setter-setter is consumed in the opposite way to how you would consume a getter. The two examples below are functionally equivalent, but have very different call styles.</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nx">setSetTen</span><span class="p">(</span><span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">);</span>

<span class="c1">// compare with...</span>

<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">getTen</span><span class="p">());</span>
</code></pre></div></div>

<p>The benefits of setter-setters are the same as with getters – laziness, implementation flexibility, hook for side effects – but with two new properties that getters don’t have: inversion of control and asynchronicity.</p>

<p>In the example above, the code that uses the getter dictates when the getter is consumed with <code class="language-plaintext highlighter-rouge">console.log</code>. When using a setter-setter, however, it is the setter-setter itself that dictates when to call <code class="language-plaintext highlighter-rouge">console.log</code>. This inversion of responsibility gives the setter-setter more power than a getter has, for instance the ability to send many values to the consuming code:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">function</span> <span class="nx">setSetTen</span><span class="p">(</span><span class="nx">setTen</span><span class="p">)</span> <span class="p">{</span>
  <span class="nx">setTen</span><span class="p">(</span><span class="mi">10</span><span class="p">)</span>
  <span class="nx">setTen</span><span class="p">(</span><span class="mi">10</span><span class="p">)</span>
  <span class="nx">setTen</span><span class="p">(</span><span class="mi">10</span><span class="p">)</span>
  <span class="nx">setTen</span><span class="p">(</span><span class="mi">10</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div></div>
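<p>A consumer of such a setter-setter receives every delivery through the callback it passes in. A self-contained sketch (redefining <code class="language-plaintext highlighter-rouge">setSetTen</code> so it runs on its own) that sums the four deliveries:</p>

```js
function setSetTen(setTen) {
  setTen(10);
  setTen(10);
  setTen(10);
  setTen(10);
}

let sum = 0;
setSetTen(x => { sum += x; }); // the callback fires four times
console.log(sum); // 40
```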

<p>Inversion of control also allows the setter-setter to decide <em>when</em> to deliver a value to the callback, for example asynchronously. Recall that another name for <code class="language-plaintext highlighter-rouge">setSetTen</code> could be <code class="language-plaintext highlighter-rouge">setTenListener</code>:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">function</span> <span class="nx">setTenListener</span><span class="p">(</span><span class="nx">cb</span><span class="p">)</span> <span class="p">{</span>
  <span class="nx">setTimeout</span><span class="p">(()</span> <span class="o">=&gt;</span> <span class="p">{</span> <span class="nx">cb</span><span class="p">(</span><span class="mi">10</span><span class="p">);</span> <span class="p">},</span> <span class="mi">1000</span><span class="p">);</span>
<span class="p">}</span>
</code></pre></div></div>
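<p>A usage sketch for the asynchronous version above (redefined here so it runs on its own): the call to <code class="language-plaintext highlighter-rouge">setTenListener</code> returns immediately, and the value arrives later through the callback.</p>

```js
function setTenListener(cb) {
  setTimeout(() => { cb(10); }, 1000);
}

console.log('subscribed');
setTenListener(x => {
  console.log('received ' + x); // runs about one second later
});
// "subscribed" logs right away; "received 10" arrives asynchronously
```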

<p>While setter-setters are common in JavaScript for asynchronous programming, callback-driven code is not necessarily asynchronous. The <code class="language-plaintext highlighter-rouge">setSetTen</code> example below is as synchronous as a getter:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">function</span> <span class="nx">setSetTen</span><span class="p">(</span><span class="nx">setTen</span><span class="p">)</span> <span class="p">{</span>
  <span class="nx">setTen</span><span class="p">(</span><span class="mi">10</span><span class="p">)</span>
<span class="p">}</span>

<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">before</span><span class="dl">'</span><span class="p">);</span>
<span class="nx">setSetTen</span><span class="p">(</span><span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">);</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">after</span><span class="dl">'</span><span class="p">);</span>

<span class="c1">// (Log shows:)</span>
<span class="c1">// before</span>
<span class="c1">// 10</span>
<span class="c1">// after</span>
</code></pre></div></div>

<h2 id="iterables" class="hr"><span class="hr">ITERABLES</span></h2>

<h3 style="text-align:center"><code>() =&gt; (() =&gt; ({done, value}))</code></h3>
<h4 style="text-align:center">An iterable is (with some details omitted)<br />a getter-getter of an object that describes either a value or completion</h4>

<svg xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" width="100%" height="400" viewBox="0 0 158.74999 57" version="1.1">
  <g transform="translate(0,-220)">
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 15.393816,254.60231 H 143.99905 l 4.90616,15.59154 H 10.491946 Z" id="function-bg" />
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 31.306206,204.01193 h 35.60154 l -1e-5,15.59154 h -40.5034 z" id="iterable-bg" />
    <text xml:space="preserve" x="37.299671" y="213.99203" style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Iterable</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 26.011436,220.87593 h 40.95695 l -10e-6,15.59154 h -45.85881 z" id="gettergetter-bg" />
    <text xml:space="preserve" x="28" y="230.8" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Getter-getter</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.131956,220.87593 h 65.297924 l 4.90616,15.59154 H 68.131086 Z" id="settersetter-bg" />
    <text xml:space="preserve" x="85.400581" y="230.89413" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Setter-setter</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 20.703206,237.7389 h 46.26576 l -10e-6,15.59154 h -51.16762 z" id="getter-bg" />
    <text xml:space="preserve" x="34.422062" y="247.7571" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Getter</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.118406,237.73893 h 70.603814 l 4.90616,15.59154 H 68.117526 Z" id="setter-bg" />
    <text xml:space="preserve" x="96.157547" y="247.75713" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Setter</tspan></text>
    <text xml:space="preserve" x="67.759735" y="264.60794" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Function</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 10.104156,271.46572 H 149.2976 l 4.90616,15.59154 H 5.2022871 Z" id="values-bg" />
    <text xml:space="preserve" x="72" y="281.44595" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Value</tspan></text>
  </g>
</svg>

<p>Getter-getters are capable of representing restartable sequences of values, but they have no convention to signal the <em>end</em> of a sequence. Iterables are a particular kind of getter-getter where the value is always an object with two properties: <code class="language-plaintext highlighter-rouge">done</code> (boolean indicating completion), and <code class="language-plaintext highlighter-rouge">value</code> (the actual delivered value unless <code class="language-plaintext highlighter-rouge">done</code> is true).</p>

<p>The completion indicator allows the code that consumes an iterable to know that subsequent calls to the getter will return invalid data, and therefore when to stop iterating.</p>

<p>In the example below, we can produce a <em>finite</em> getter-getter of even numbers in the range 40 to 48, by respecting the completion indicator:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">function</span> <span class="nx">getGetNext</span><span class="p">()</span> <span class="p">{</span>
  <span class="kd">let</span> <span class="nx">i</span> <span class="o">=</span> <span class="mi">40</span><span class="p">;</span>
  <span class="k">return</span> <span class="kd">function</span> <span class="nx">getNext</span><span class="p">()</span> <span class="p">{</span>
    <span class="k">if</span> <span class="p">(</span><span class="nx">i</span> <span class="o">&lt;=</span> <span class="mi">48</span><span class="p">)</span> <span class="p">{</span>
      <span class="kd">const</span> <span class="nx">next</span> <span class="o">=</span> <span class="nx">i</span><span class="p">;</span>
      <span class="nx">i</span> <span class="o">+=</span> <span class="mi">2</span><span class="p">;</span>
      <span class="k">return</span> <span class="p">{</span><span class="na">done</span><span class="p">:</span> <span class="kc">false</span><span class="p">,</span> <span class="na">value</span><span class="p">:</span> <span class="nx">next</span><span class="p">};</span>
    <span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
      <span class="k">return</span> <span class="p">{</span><span class="na">done</span><span class="p">:</span> <span class="kc">true</span><span class="p">};</span>
    <span class="p">}</span>
  <span class="p">}</span>
<span class="p">}</span>

<span class="kd">let</span> <span class="nx">getNext</span> <span class="o">=</span> <span class="nx">getGetNext</span><span class="p">();</span>
<span class="k">for</span> <span class="p">(</span><span class="kd">let</span> <span class="nx">result</span> <span class="o">=</span> <span class="nx">getNext</span><span class="p">();</span> <span class="o">!</span><span class="nx">result</span><span class="p">.</span><span class="nx">done</span><span class="p">;</span> <span class="nx">result</span> <span class="o">=</span> <span class="nx">getNext</span><span class="p">())</span> <span class="p">{</span>
  <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">result</span><span class="p">.</span><span class="nx">value</span><span class="p">);</span>
<span class="p">}</span>
</code></pre></div></div>

<p><em>ES6 Iterables</em> have further conventions beyond the simple <code class="language-plaintext highlighter-rouge">() =&gt; (() =&gt; ({done, value}))</code> pattern: they add a wrapper object around each getter:</p>

<ul>
  <li>The outer getter <code class="language-plaintext highlighter-rouge">f</code> becomes the object <code class="language-plaintext highlighter-rouge">{[Symbol.iterator]: f}</code></li>
  <li>The inner getter <code class="language-plaintext highlighter-rouge">g</code> becomes the object <code class="language-plaintext highlighter-rouge">{next: g}</code></li>
</ul>

<p>Here is the code that matches the previous example, but as a valid ES6 Iterable:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">const</span> <span class="nx">oddNums</span> <span class="o">=</span> <span class="p">{</span>
  <span class="p">[</span><span class="nb">Symbol</span><span class="p">.</span><span class="nx">iterator</span><span class="p">]:</span> <span class="p">()</span> <span class="o">=&gt;</span> <span class="p">{</span>
    <span class="kd">let</span> <span class="nx">i</span> <span class="o">=</span> <span class="mi">40</span><span class="p">;</span>
    <span class="k">return</span> <span class="p">{</span>
      <span class="na">next</span><span class="p">:</span> <span class="p">()</span> <span class="o">=&gt;</span> <span class="p">{</span>
        <span class="k">if</span> <span class="p">(</span><span class="nx">i</span> <span class="o">&lt;=</span> <span class="mi">48</span><span class="p">)</span> <span class="p">{</span>
          <span class="kd">const</span> <span class="nx">next</span> <span class="o">=</span> <span class="nx">i</span><span class="p">;</span>
          <span class="nx">i</span> <span class="o">+=</span> <span class="mi">2</span><span class="p">;</span>
          <span class="k">return</span> <span class="p">{</span><span class="na">done</span><span class="p">:</span> <span class="kc">false</span><span class="p">,</span> <span class="na">value</span><span class="p">:</span> <span class="nx">next</span><span class="p">};</span>
        <span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
          <span class="k">return</span> <span class="p">{</span><span class="na">done</span><span class="p">:</span> <span class="kc">true</span><span class="p">};</span>
        <span class="p">}</span>
      <span class="p">}</span>
    <span class="p">}</span>
  <span class="p">}</span>
<span class="p">}</span>

<span class="kd">let</span> <span class="nx">iterator</span> <span class="o">=</span> <span class="nx">oddNums</span><span class="p">[</span><span class="nb">Symbol</span><span class="p">.</span><span class="nx">iterator</span><span class="p">]();</span>
<span class="k">for</span> <span class="p">(</span><span class="kd">let</span> <span class="nx">result</span> <span class="o">=</span> <span class="nx">iterator</span><span class="p">.</span><span class="nx">next</span><span class="p">();</span> <span class="o">!</span><span class="nx">result</span><span class="p">.</span><span class="nx">done</span><span class="p">;</span> <span class="nx">result</span> <span class="o">=</span> <span class="nx">iterator</span><span class="p">.</span><span class="nx">next</span><span class="p">())</span> <span class="p">{</span>
  <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">result</span><span class="p">.</span><span class="nx">value</span><span class="p">);</span>
<span class="p">}</span>
</code></pre></div></div>

<p>Notice the difference between those examples:</p>

<div class="language-diff highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gd">-function getGetNext() {
</span><span class="gi">+const oddNums = {
+  [Symbol.iterator]: () =&gt; {
</span>     let i = 40;
<span class="gd">-  return function getNext() {
</span><span class="gi">+    return {
+      next: () =&gt; {
</span>         if (i &lt;= 48) {
           const next = i;
           i += 2;
           return {done: false, value: next};
         } else {
           return {done: true};
         }
       }
<span class="gi">+    }
</span>   }
<span class="gi">+}
</span>
<span class="gd">-let getNext = getGetNext();</span>
<span class="gd">-for (let result = getNext(); !result.done; result = getNext()) {
</span><span class="gi">+let iterator = oddNums[Symbol.iterator]();
+for (let result = iterator.next(); !result.done; result = iterator.next()) {
</span>  console.log(result.value);
}
</code></pre></div></div>

<p>ES6 provides the syntax sugar <code class="language-plaintext highlighter-rouge">for-let-of</code> to consume Iterables in a convenient way:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">for</span> <span class="p">(</span><span class="kd">let</span> <span class="nx">x</span> <span class="k">of</span> <span class="nx">oddNums</span><span class="p">)</span> <span class="p">{</span>
  <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">x</span><span class="p">);</span>
<span class="p">}</span>
</code></pre></div></div>

<p>For easily creating Iterables, ES6 also provides the generator function syntax sugar <code class="language-plaintext highlighter-rouge">function*</code>:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">function</span><span class="o">*</span> <span class="nx">oddNums</span><span class="p">()</span> <span class="p">{</span>
  <span class="kd">let</span> <span class="nx">i</span> <span class="o">=</span> <span class="mi">40</span><span class="p">;</span>
  <span class="k">while</span> <span class="p">(</span><span class="kc">true</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">if</span> <span class="p">(</span><span class="nx">i</span> <span class="o">&lt;=</span> <span class="mi">48</span><span class="p">)</span> <span class="p">{</span>
      <span class="kd">const</span> <span class="nx">next</span> <span class="o">=</span> <span class="nx">i</span><span class="p">;</span>
      <span class="nx">i</span> <span class="o">+=</span> <span class="mi">2</span><span class="p">;</span>
      <span class="k">yield</span> <span class="nx">next</span><span class="p">;</span>
    <span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
      <span class="k">return</span><span class="p">;</span>
    <span class="p">}</span>
  <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<p>With <strong>production-side syntax sugar</strong> and <strong>consumption-side syntax sugar</strong>, iterables have been easy-to-use abstractions for completable sequences of values in JavaScript since 2015. Note that <em>calling a generator function</em> returns an Iterable; the generator function itself is not an Iterable:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">function</span><span class="o">*</span> <span class="nx">oddNums</span><span class="p">()</span> <span class="p">{</span>
  <span class="kd">let</span> <span class="nx">i</span> <span class="o">=</span> <span class="mi">40</span><span class="p">;</span>
  <span class="k">while</span> <span class="p">(</span><span class="kc">true</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">if</span> <span class="p">(</span><span class="nx">i</span> <span class="o">&lt;=</span> <span class="mi">48</span><span class="p">)</span> <span class="p">{</span>
      <span class="k">yield</span> <span class="nx">i</span><span class="p">;</span>
      <span class="nx">i</span> <span class="o">+=</span> <span class="mi">2</span><span class="p">;</span>
    <span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
      <span class="k">return</span><span class="p">;</span>
    <span class="p">}</span>
  <span class="p">}</span>
<span class="p">}</span>

<span class="k">for</span> <span class="p">(</span><span class="kd">let</span> <span class="nx">x</span> <span class="k">of</span> <span class="nx">oddNums</span><span class="p">())</span> <span class="p">{</span>
  <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">x</span><span class="p">);</span>
<span class="p">}</span>
</code></pre></div></div>
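<p>A consequence worth noting: each call to the generator function creates a fresh, independent iterator, so the whole sequence can be replayed simply by calling the function again. A minimal sketch (using spread syntax to drain each iterator into an array):</p>

```javascript
function* oddNums() {
  let i = 40;
  while (i <= 48) {
    yield i;
    i += 2;
  }
}

// Each call produces an independent iterator over the same sequence:
const first = [...oddNums()];  // spread consumes one whole iterator
const second = [...oddNums()]; // a fresh iterator, same values again

console.log(first);  // [40, 42, 44, 46, 48]
console.log(second); // [40, 42, 44, 46, 48]
```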

<h2 id="promises" class="hr"><span class="hr">PROMISES</span></h2>

<h3 style="text-align:center"><code>(X =&gt; (), Err =&gt; ()) =&gt; ()</code></h3>
<h4 style="text-align:center">A promise is (with some details omitted:)<br />a setter of two setters, with additional guarantees</h4>

<svg xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" width="100%" height="400" viewBox="0 0 158.74999 57" version="1.1">
  <g transform="translate(0,-220)">
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 15.393816,254.60231 H 143.99905 l 4.90616,15.59154 H 10.491946 Z" id="function-bg" />
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 31.306206,204.01193 h 35.60154 l -1e-5,15.59154 h -40.5034 z" id="iterable-bg" />
    <text xml:space="preserve" x="37.299671" y="213.99203" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Iterable</tspan></text>
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.151916,204.01193 h 24.68811 l 3e-4,15.59154 h -24.68874 z" id="promise-bg" />
    <text xml:space="preserve" x="69.284836" y="214.01372" style="opacity:1;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Promise</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 26.011436,220.87593 h 40.95695 l -10e-6,15.59154 h -45.85881 z" id="gettergetter-bg" />
    <text xml:space="preserve" x="28" y="230.8" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Getter-getter</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.131956,220.87593 h 65.297924 l 4.90616,15.59154 H 68.131086 Z" id="settersetter-bg" />
    <text xml:space="preserve" x="85.400581" y="230.89413" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Setter-setter</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 20.703206,237.7389 h 46.26576 l -10e-6,15.59154 h -51.16762 z" id="getter-bg" />
    <text xml:space="preserve" x="34.422062" y="247.7571" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Getter</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.118406,237.73893 h 70.603814 l 4.90616,15.59154 H 68.117526 Z" id="setter-bg" />
    <text xml:space="preserve" x="96.157547" y="247.75713" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Setter</tspan></text>
    <text xml:space="preserve" x="67.759735" y="264.60794" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Function</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 10.104156,271.46572 H 149.2976 l 4.90616,15.59154 H 5.2022871 Z" id="values-bg" />
    <text xml:space="preserve" x="72" y="281.44595" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Value</tspan></text>
  </g>
</svg>

<p>While setter-setters are powerful, they can be very unpredictable due to inversion of control: they can be synchronous or asynchronous, and can deliver zero, one, or multiple values over time. Promises are a special kind of setter-setter that provides some guarantees on the delivery of values:</p>

<ul>
  <li>The inner setter (the “callback”) is never called synchronously</li>
  <li>The inner setter is called at most once</li>
  <li>An optional second setter is provided for delivering error values</li>
</ul>

<p>Compare the setter-setter below with an equivalent Promise. The Promise delivers the value only once, and never between the two console.log calls, because value delivery is asynchronous:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">function</span> <span class="nx">setSetTen</span><span class="p">(</span><span class="nx">setTen</span><span class="p">)</span> <span class="p">{</span>
  <span class="nx">setTen</span><span class="p">(</span><span class="mi">10</span><span class="p">)</span>
  <span class="nx">setTen</span><span class="p">(</span><span class="mi">10</span><span class="p">)</span>
<span class="p">}</span>

<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">before setSetTen</span><span class="dl">'</span><span class="p">);</span>
<span class="nx">setSetTen</span><span class="p">(</span><span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">);</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">after setSetTen</span><span class="dl">'</span><span class="p">);</span>

<span class="c1">// (Log shows:)</span>
<span class="c1">// before setSetTen</span>
<span class="c1">// 10</span>
<span class="c1">// 10</span>
<span class="c1">// after setSetTen</span>
</code></pre></div></div>

<p>Compared with:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">const</span> <span class="nx">tenPromise</span> <span class="o">=</span> <span class="k">new</span> <span class="nb">Promise</span><span class="p">(</span><span class="kd">function</span> <span class="nx">setSetTen</span><span class="p">(</span><span class="nx">setTen</span><span class="p">)</span> <span class="p">{</span>
  <span class="nx">setTen</span><span class="p">(</span><span class="mi">10</span><span class="p">);</span>
  <span class="nx">setTen</span><span class="p">(</span><span class="mi">10</span><span class="p">);</span>
<span class="p">});</span>

<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">before Promise.then</span><span class="dl">'</span><span class="p">);</span>
<span class="nx">tenPromise</span><span class="p">.</span><span class="nx">then</span><span class="p">(</span><span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">);</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">after Promise.then</span><span class="dl">'</span><span class="p">);</span>

<span class="c1">// (Log shows:)</span>
<span class="c1">// before Promise.then</span>
<span class="c1">// after Promise.then</span>
<span class="c1">// 10</span>
</code></pre></div></div>

<p>Promises conveniently represent <em>one asynchronous and non-reusable value</em>, and since ES2017 they have syntax sugar for production and consumption: <code class="language-plaintext highlighter-rouge">async</code>–<code class="language-plaintext highlighter-rouge">await</code>. To consume the value within a Promise, use <code class="language-plaintext highlighter-rouge">await</code> inside functions prefixed with the keyword <code class="language-plaintext highlighter-rouge">async</code>:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">async</span> <span class="kd">function</span> <span class="nx">main</span><span class="p">()</span> <span class="p">{</span>
  <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">before await</span><span class="dl">'</span><span class="p">);</span>
  <span class="kd">const</span> <span class="nx">ten</span> <span class="o">=</span> <span class="k">await</span> <span class="k">new</span> <span class="nb">Promise</span><span class="p">(</span><span class="kd">function</span> <span class="nx">setSetTen</span><span class="p">(</span><span class="nx">setTen</span><span class="p">)</span> <span class="p">{</span>
    <span class="nx">setTen</span><span class="p">(</span><span class="mi">10</span><span class="p">);</span>
  <span class="p">});</span>
  <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">ten</span><span class="p">);</span>
  <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">after await</span><span class="dl">'</span><span class="p">);</span>
<span class="p">}</span>

<span class="nx">main</span><span class="p">();</span>

<span class="c1">// (Log shows:)</span>
<span class="c1">// before await</span>
<span class="c1">// 10</span>
<span class="c1">// after await</span>
</code></pre></div></div>

<p>The syntax sugar <code class="language-plaintext highlighter-rouge">async</code>–<code class="language-plaintext highlighter-rouge">await</code> can also be used to create a Promise, because an <code class="language-plaintext highlighter-rouge">async function</code> returns a Promise that delivers whatever value was <code class="language-plaintext highlighter-rouge">return</code>‘d in the function.</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">async</span> <span class="kd">function</span> <span class="nx">getTenPromise</span><span class="p">()</span> <span class="p">{</span>
  <span class="k">return</span> <span class="mi">10</span><span class="p">;</span>
<span class="p">}</span>
<span class="kd">const</span> <span class="nx">tenPromise</span> <span class="o">=</span> <span class="nx">getTenPromise</span><span class="p">();</span>

<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">before Promise.then</span><span class="dl">'</span><span class="p">);</span>
<span class="nx">tenPromise</span><span class="p">.</span><span class="nx">then</span><span class="p">(</span><span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">);</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">after Promise.then</span><span class="dl">'</span><span class="p">);</span>

<span class="c1">// (Log shows:)</span>
<span class="c1">// before Promise.then</span>
<span class="c1">// after Promise.then</span>
<span class="c1">// 10</span>
</code></pre></div></div>
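<p>The two sides of the sugar compose naturally: one <code class="language-plaintext highlighter-rouge">async</code> function can produce a Promise (via <code class="language-plaintext highlighter-rouge">return</code>) while another consumes it (via <code class="language-plaintext highlighter-rouge">await</code>), with no explicit <code class="language-plaintext highlighter-rouge">new Promise</code> or <code class="language-plaintext highlighter-rouge">.then</code> in sight. A minimal sketch combining the two previous examples:</p>

```javascript
async function getTen() {
  return 10; // produces a Promise that will deliver 10
}

async function main() {
  const ten = await getTen(); // consumes that Promise
  console.log(ten); // 10
}

main();
```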

<h2 id="observables" class="hr"><span class="hr">OBSERVABLES</span></h2>

<h3 style="text-align:center"><code>(X =&gt; (), Err =&gt; (), () =&gt; ()) =&gt; ()</code></h3>
<h4 style="text-align:center">An observable is (with some details omitted:)<br />a setter of three setters, with additional guarantees</h4>

<svg xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" width="100%" height="400" viewBox="0 0 158.74999 57" version="1.1">
  <g transform="translate(0,-220)">
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 15.393816,254.60231 H 143.99905 l 4.90616,15.59154 H 10.491946 Z" id="function-bg" />
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 31.306206,204.01193 h 35.60154 l -1e-5,15.59154 h -40.5034 z" id="iterable-bg" />
    <text xml:space="preserve" x="37.299671" y="213.99203" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Iterable</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.151916,204.01193 h 24.68811 l 3e-4,15.59154 h -24.68874 z" id="promise-bg" />
    <text xml:space="preserve" x="69.284836" y="214.01372" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Promise</tspan></text>
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 94.097966,204.01193 h 34.014774 l 4.90616,15.59154 H 94.097096 Z" id="observable-bg" />
    <text xml:space="preserve" x="96.429359" y="214.01372" style="opacity:1;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Observable</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 26.011436,220.87593 h 40.95695 l -10e-6,15.59154 h -45.85881 z" id="gettergetter-bg" />
    <text xml:space="preserve" x="28" y="230.8" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Getter-getter</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.131956,220.87593 h 65.297924 l 4.90616,15.59154 H 68.131086 Z" id="settersetter-bg" />
    <text xml:space="preserve" x="85.400581" y="230.89413" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Setter-setter</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 20.703206,237.7389 h 46.26576 l -10e-6,15.59154 h -51.16762 z" id="getter-bg" />
    <text xml:space="preserve" x="34.422062" y="247.7571" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Getter</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.118406,237.73893 h 70.603814 l 4.90616,15.59154 H 68.117526 Z" id="setter-bg" />
    <text xml:space="preserve" x="96.157547" y="247.75713" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Setter</tspan></text>
    <text xml:space="preserve" x="67.759735" y="264.60794" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Function</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 10.104156,271.46572 H 149.2976 l 4.90616,15.59154 H 5.2022871 Z" id="values-bg" />
    <text xml:space="preserve" x="72" y="281.44595" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Value</tspan></text>
  </g>
</svg>

<p>Just as Iterables are a special type of getter-getter with the added capability of signalling completion, Observables are a type of setter-setter that adds completion capability. Typical setter-setters in JavaScript, like <code class="language-plaintext highlighter-rouge">element.addEventListener</code>, don’t signal when the stream of events is done, which makes it difficult to concatenate event streams or implement other completion-related logic.</p>

<p>Unlike Iterables, which are standardized in the JavaScript specification, Observables are loosely-agreed conventions found among several libraries such as <a href="https://github.com/ReactiveX/rxjs">RxJS</a>, <a href="https://github.com/cujojs/most">most.js</a>, <a href="https://github.com/staltz/xstream/">xstream</a>, <a href="https://baconjs.github.io/">Bacon.js</a>, etc. Although <a href="https://github.com/tc39/proposal-observable">Observables are being considered</a> as a TC39 proposal, the proposal is in flux, so in this article let us assume the <a href="https://github.com/staltz/fantasy-observable">Fantasy Observable</a> specification, which libraries like RxJS, most.js and xstream have traditionally followed.</p>

<p>Observables are <a href="http://csl.stanford.edu/~christos/pldi2010.fit/meijer.duality.pdf">the dual of Iterables</a>, and this can be seen through some symmetries:</p>

<ul>
  <li><strong>Iterable</strong>
    <ul>
      <li>Is an object</li>
      <li>Has the “iterate” method, a.k.a. <code class="language-plaintext highlighter-rouge">Symbol.iterator</code></li>
      <li>“iterate” method is a <strong>getter</strong> of an Iterator object</li>
      <li>Iterator object has a <code class="language-plaintext highlighter-rouge">next</code> method as a <strong>getter</strong></li>
    </ul>
  </li>
  <li><strong>Observable</strong>
    <ul>
      <li>Is an object</li>
      <li>Has the “observe” method, a.k.a. <code class="language-plaintext highlighter-rouge">subscribe</code></li>
      <li>“observe” method is a <strong>setter</strong> of an Observer object</li>
      <li>Observer object has a <code class="language-plaintext highlighter-rouge">next</code> method as a <strong>setter</strong></li>
    </ul>
  </li>
</ul>
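<p>The symmetry above can be made concrete in code. A minimal sketch (hypothetical single-value sequences, following the subscribe/next convention used throughout this article): the Iterable hands a consumer a getter of getters to <em>pull</em> from, while the Observable receives a setter of setters to <em>push</em> into:</p>

```javascript
// Iterable: the consumer *pulls* values through getters
const iterableTens = {
  [Symbol.iterator]: () => {
    let sent = false;
    return {
      next: () => sent ? {done: true} : (sent = true, {done: false, value: 10})
    };
  }
};

// Observable: the producer *pushes* values through setters
const observableTens = {
  subscribe: (observer) => {
    observer.next(10);
    observer.complete();
  }
};

for (let x of iterableTens) console.log(x); // 10
observableTens.subscribe({
  next: x => console.log(x),                // 10
  complete: () => console.log('done'),      // done
});
```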

<p>The observer object can also contain two other methods, <code class="language-plaintext highlighter-rouge">complete</code> and <code class="language-plaintext highlighter-rouge">error</code>, to indicate successful completion and failed completion, respectively. The <code class="language-plaintext highlighter-rouge">complete</code> setter is equivalent to the <code class="language-plaintext highlighter-rouge">done</code> indicator in Iterables, and the <code class="language-plaintext highlighter-rouge">error</code> setter is equivalent to throwing an exception from the iterator getter.</p>

<p>Like Promises, Observables add some guarantees on the delivery of values:</p>

<ul>
  <li>Once the <code class="language-plaintext highlighter-rouge">complete</code> setter is called, the <code class="language-plaintext highlighter-rouge">error</code> setter will not be called</li>
  <li>Once the <code class="language-plaintext highlighter-rouge">error</code> setter is called, the <code class="language-plaintext highlighter-rouge">complete</code> setter will not be called</li>
  <li>Once the <code class="language-plaintext highlighter-rouge">complete</code> setter or the <code class="language-plaintext highlighter-rouge">error</code> setter has been called, the <code class="language-plaintext highlighter-rouge">next</code> setter will not be called</li>
</ul>
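<p>As a hypothetical sketch of the <code class="language-plaintext highlighter-rouge">error</code> channel, the Observable below fails after delivering one value. In this hand-rolled example the guarantees hold simply because the producer stops calling the setters after <code class="language-plaintext highlighter-rouge">error</code>; real Observable libraries enforce them for you:</p>

```javascript
const failingNums = {
  subscribe: (observer) => {
    observer.next(40);
    observer.error(new Error('oops')); // terminal: no next/complete after this
  }
};

failingNums.subscribe({
  next: x => console.log(x),                     // 40
  error: e => console.log('failed:', e.message), // failed: oops
  complete: () => console.log('done'),           // (never called)
});
```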

<p>In the example below, the Observable represents an asynchronous and <em>finite</em> sequence of numbers:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">const</span> <span class="nx">oddNums</span> <span class="o">=</span> <span class="p">{</span>
  <span class="na">subscribe</span><span class="p">:</span> <span class="p">(</span><span class="nx">observer</span><span class="p">)</span> <span class="o">=&gt;</span> <span class="p">{</span>
    <span class="kd">let</span> <span class="nx">x</span> <span class="o">=</span> <span class="mi">40</span><span class="p">;</span>
    <span class="kd">let</span> <span class="nx">clock</span> <span class="o">=</span> <span class="nx">setInterval</span><span class="p">(()</span> <span class="o">=&gt;</span> <span class="p">{</span>
      <span class="k">if</span> <span class="p">(</span><span class="nx">x</span> <span class="o">&lt;=</span> <span class="mi">48</span><span class="p">)</span> <span class="p">{</span>
        <span class="nx">observer</span><span class="p">.</span><span class="nx">next</span><span class="p">(</span><span class="nx">x</span><span class="p">);</span>
        <span class="nx">x</span> <span class="o">+=</span> <span class="mi">2</span><span class="p">;</span>
      <span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
        <span class="nx">observer</span><span class="p">.</span><span class="nx">complete</span><span class="p">();</span>
        <span class="nx">clearInterval</span><span class="p">(</span><span class="nx">clock</span><span class="p">);</span>
      <span class="p">}</span>
    <span class="p">},</span> <span class="mi">1000</span><span class="p">);</span>
  <span class="p">}</span>
<span class="p">};</span>

<span class="nx">oddNums</span><span class="p">.</span><span class="nx">subscribe</span><span class="p">({</span>
  <span class="na">next</span><span class="p">:</span> <span class="nx">x</span> <span class="o">=&gt;</span> <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">x</span><span class="p">),</span>
  <span class="na">complete</span><span class="p">:</span> <span class="p">()</span> <span class="o">=&gt;</span> <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">done</span><span class="dl">'</span><span class="p">),</span>
<span class="p">});</span>

<span class="c1">// (Log shows:)</span>
<span class="c1">// 40</span>
<span class="c1">// 42</span>
<span class="c1">// 44</span>
<span class="c1">// 46</span>
<span class="c1">// 48</span>
<span class="c1">// done</span>
</code></pre></div></div>

<p>As with setter-setters, Observables cause inversion of control, so the consumption side (<code class="language-plaintext highlighter-rouge">oddNums.subscribe</code>) has no way of pausing or cancelling the incoming flow of data. Most Observable implementations add one important detail that allows cancellation to be transmitted from consumer to producer: Subscriptions.</p>

<p>The <code class="language-plaintext highlighter-rouge">subscribe</code> function can return an object – the subscription – with one method: <code class="language-plaintext highlighter-rouge">unsubscribe</code>, which the consumer side can use to abort the incoming flow of data. Thus <code class="language-plaintext highlighter-rouge">subscribe</code> is no longer strictly a setter, because it is a function with both an input (the observer) and an output (the subscription). Below, we add a subscription object to our previous example:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">const</span> <span class="nx">oddNums</span> <span class="o">=</span> <span class="p">{</span>
  <span class="na">subscribe</span><span class="p">:</span> <span class="p">(</span><span class="nx">observer</span><span class="p">)</span> <span class="o">=&gt;</span> <span class="p">{</span>
    <span class="kd">let</span> <span class="nx">x</span> <span class="o">=</span> <span class="mi">40</span><span class="p">;</span>
    <span class="kd">let</span> <span class="nx">clock</span> <span class="o">=</span> <span class="nx">setInterval</span><span class="p">(()</span> <span class="o">=&gt;</span> <span class="p">{</span>
      <span class="k">if</span> <span class="p">(</span><span class="nx">x</span> <span class="o">&lt;=</span> <span class="mi">48</span><span class="p">)</span> <span class="p">{</span>
        <span class="nx">observer</span><span class="p">.</span><span class="nx">next</span><span class="p">(</span><span class="nx">x</span><span class="p">);</span>
        <span class="nx">x</span> <span class="o">+=</span> <span class="mi">2</span><span class="p">;</span>
      <span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
        <span class="nx">observer</span><span class="p">.</span><span class="nx">complete</span><span class="p">();</span>
        <span class="nx">clearInterval</span><span class="p">(</span><span class="nx">clock</span><span class="p">);</span>
      <span class="p">}</span>
    <span class="p">},</span> <span class="mi">1000</span><span class="p">);</span>
    <span class="c1">// 🔷 Subscription:</span>
    <span class="k">return</span> <span class="p">{</span>
      <span class="na">unsubscribe</span><span class="p">:</span> <span class="p">()</span> <span class="o">=&gt;</span> <span class="p">{</span>
        <span class="nx">clearInterval</span><span class="p">(</span><span class="nx">clock</span><span class="p">);</span>
      <span class="p">}</span>
    <span class="p">};</span>
  <span class="p">}</span>
<span class="p">};</span>

<span class="kd">const</span> <span class="nx">subscription</span> <span class="o">=</span> <span class="nx">oddNums</span><span class="p">.</span><span class="nx">subscribe</span><span class="p">({</span>
  <span class="na">next</span><span class="p">:</span> <span class="nx">x</span> <span class="o">=&gt;</span> <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">x</span><span class="p">),</span>
  <span class="na">complete</span><span class="p">:</span> <span class="p">()</span> <span class="o">=&gt;</span> <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">done</span><span class="dl">'</span><span class="p">),</span>
<span class="p">});</span>

<span class="c1">// 🔷 Cancel the incoming flow of data after 2.5 seconds</span>
<span class="nx">setTimeout</span><span class="p">(()</span> <span class="o">=&gt;</span> <span class="p">{</span>
  <span class="nx">subscription</span><span class="p">.</span><span class="nx">unsubscribe</span><span class="p">();</span>
<span class="p">},</span> <span class="mi">2500</span><span class="p">);</span>

<span class="c1">// (Log shows:)</span>
<span class="c1">// 40</span>
<span class="c1">// 42</span>
</code></pre></div></div>

<h2 id="async-iterables" class="hr"><span class="hr">ASYNC ITERABLES</span></h2>

<h3 style="text-align:center"><code>() =&gt; (() =&gt; Promise&lt;{done, value}&gt;)</code></h3>
<h4 style="text-align:center">An async iterable is (with some details omitted)<br />like an iterable that yields promises of values</h4>

<svg xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" width="100%" height="440" viewBox="0 0 158.74999 116.41654" version="1.1">
  <g transform="translate(0,-180.58345)">
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 15.393816,254.60231 H 143.99905 l 4.90616,15.59154 H 10.491946 Z" id="function-bg" />
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 31.306206,204.01193 h 35.60154 l -1e-5,15.59154 h -40.5034 z" id="iterable-bg" />
    <text xml:space="preserve" x="37.299671" y="213.99203" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Iterable</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.151916,204.01193 h 24.68811 l 3e-4,15.59154 h -24.68874 z" id="promise-bg" />
    <text xml:space="preserve" x="69.284836" y="214.01372" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Promise</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 94.097966,204.01193 h 34.014774 l 4.90616,15.59154 H 94.097096 Z" id="observable-bg" />
    <text xml:space="preserve" x="96.429359" y="214.01372" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Observable</tspan></text>
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 36.603636,187.14867 h 51.097575 l 4.90616,15.59154 H 31.701766 Z" id="asynciterable-bg" />
    <text xml:space="preserve" x="43.947285" y="197" style="opacity:1;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>AsyncIterable</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 26.011436,220.87593 h 40.95695 l -10e-6,15.59154 h -45.85881 z" id="gettergetter-bg" />
    <text xml:space="preserve" x="28" y="230.8" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Getter-getter</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.131956,220.87593 h 65.297924 l 4.90616,15.59154 H 68.131086 Z" id="settersetter-bg" />
    <text xml:space="preserve" x="85.400581" y="230.89413" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Setter-setter</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 20.703206,237.7389 h 46.26576 l -10e-6,15.59154 h -51.16762 z" id="getter-bg" />
    <text xml:space="preserve" x="34.422062" y="247.7571" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Getter</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.118406,237.73893 h 70.603814 l 4.90616,15.59154 H 68.117526 Z" id="setter-bg" />
    <text xml:space="preserve" x="96.157547" y="247.75713" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Setter</tspan></text>
    <text xml:space="preserve" x="67.759735" y="264.60794" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Function</tspan></text>
    <path style="opacity:0.6;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 10.104156,271.46572 H 149.2976 l 4.90616,15.59154 H 5.2022871 Z" id="values-bg" />
    <text xml:space="preserve" x="72" y="281.44595" style="opacity:0.6;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Value</tspan></text>
  </g>
</svg>

<p>Iterables can represent any infinite or finite sequence of values, but they have one limitation: the value must be synchronously available as soon as the consumer calls the <code class="language-plaintext highlighter-rouge">next()</code> method. AsyncIterables extend the power of Iterables by allowing values to be delivered “later”, not immediately when requested.</p>

<p>AsyncIterables implement asynchronous delivery of values by using Promises, because a Promise represents a single asynchronous value. Every time the iterator’s <code class="language-plaintext highlighter-rouge">next()</code> (the inner getter function) is called, a Promise is created and returned.</p>

<p>In the example below, we take the <code class="language-plaintext highlighter-rouge">oddNums</code> Iterable example and make it yield Promises of values that resolve after a delay:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">function</span> <span class="nx">slowResolve</span><span class="p">(</span><span class="nx">val</span><span class="p">)</span> <span class="p">{</span>
  <span class="k">return</span> <span class="k">new</span> <span class="nb">Promise</span><span class="p">(</span><span class="nx">resolve</span> <span class="o">=&gt;</span> <span class="p">{</span>
    <span class="nx">setTimeout</span><span class="p">(()</span> <span class="o">=&gt;</span> <span class="nx">resolve</span><span class="p">(</span><span class="nx">val</span><span class="p">),</span> <span class="mi">1000</span><span class="p">);</span>
  <span class="p">});</span>
<span class="p">}</span>

<span class="kd">function</span><span class="o">*</span> <span class="nx">oddNums</span><span class="p">()</span> <span class="p">{</span>
  <span class="kd">let</span> <span class="nx">i</span> <span class="o">=</span> <span class="mi">40</span><span class="p">;</span>
  <span class="k">while</span> <span class="p">(</span><span class="kc">true</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">if</span> <span class="p">(</span><span class="nx">i</span> <span class="o">&lt;=</span> <span class="mi">48</span><span class="p">)</span> <span class="p">{</span>
      <span class="k">yield</span> <span class="nx">slowResolve</span><span class="p">(</span><span class="nx">i</span><span class="p">);</span> <span class="c1">// 🔷 yield a Promise</span>
      <span class="nx">i</span> <span class="o">+=</span> <span class="mi">2</span><span class="p">;</span>
    <span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
      <span class="k">return</span><span class="p">;</span>
    <span class="p">}</span>
  <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<p>To consume an AsyncIterable, we can just <em>await</em> each yielded Promise before requesting the next Promise:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">async</span> <span class="kd">function</span> <span class="nx">main</span><span class="p">()</span> <span class="p">{</span>
  <span class="k">for</span> <span class="p">(</span><span class="kd">let</span> <span class="nx">promise</span> <span class="k">of</span> <span class="nx">oddNums</span><span class="p">())</span> <span class="p">{</span>
    <span class="kd">const</span> <span class="nx">x</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">promise</span><span class="p">;</span>
    <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">x</span><span class="p">);</span>
  <span class="p">}</span>
  <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">done</span><span class="dl">'</span><span class="p">);</span>
<span class="p">}</span>

<span class="nx">main</span><span class="p">();</span>

<span class="c1">// (Log shows:)</span>
<span class="c1">// 40</span>
<span class="c1">// 42</span>
<span class="c1">// 44</span>
<span class="c1">// 46</span>
<span class="c1">// 48</span>
<span class="c1">// done</span>
</code></pre></div></div>

<p>The example above creates a good intuition for AsyncIterables, but it is actually not a valid ES2018 AsyncIterable. What we did above was an ES6 Iterable of Promises, but ES2018 AsyncIterables are a getter-getter of a Promise of <code class="language-plaintext highlighter-rouge">{done, value}</code> objects. Compare these two:</p>

<ul>
  <li>Iterable of Promises: <code class="language-plaintext highlighter-rouge">() =&gt; (() =&gt; {done, value: Promise&lt;X&gt;})</code></li>
  <li>ES2018 AsyncIterable: <code class="language-plaintext highlighter-rouge">() =&gt; (() =&gt; Promise&lt;{done, value}&gt;)</code></li>
</ul>

<p>Counterintuitively, ES2018 AsyncIterables are <em>not Iterables</em>: they are simply getter-getters of Promises that resemble Iterables in many ways. The reason for this detail is that an AsyncIterable also needs to allow completion (the <code class="language-plaintext highlighter-rouge">done</code> boolean) to be delivered asynchronously, so the Promise must <em>wrap the whole</em> <code class="language-plaintext highlighter-rouge">{done, value}</code> object.</p>

<p>Because AsyncIterables are not Iterables, they use different Symbols. While Iterables rely on <code class="language-plaintext highlighter-rouge">Symbol.iterator</code>, AsyncIterables use <code class="language-plaintext highlighter-rouge">Symbol.asyncIterator</code> instead. In the example below, we implement a valid ES2018 AsyncIterable that is similar to the previous example:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">const</span> <span class="nx">oddNums</span> <span class="o">=</span> <span class="p">{</span>
  <span class="p">[</span><span class="nb">Symbol</span><span class="p">.</span><span class="nx">asyncIterator</span><span class="p">]:</span> <span class="p">()</span> <span class="o">=&gt;</span> <span class="p">{</span>
    <span class="kd">let</span> <span class="nx">i</span> <span class="o">=</span> <span class="mi">40</span><span class="p">;</span>
    <span class="k">return</span> <span class="p">{</span>
      <span class="na">next</span><span class="p">:</span> <span class="p">()</span> <span class="o">=&gt;</span> <span class="p">{</span>
        <span class="k">if</span> <span class="p">(</span><span class="nx">i</span> <span class="o">&lt;=</span> <span class="mi">48</span><span class="p">)</span> <span class="p">{</span>
          <span class="kd">const</span> <span class="nx">next</span> <span class="o">=</span> <span class="nx">i</span><span class="p">;</span>
          <span class="nx">i</span> <span class="o">+=</span> <span class="mi">2</span><span class="p">;</span>
          <span class="k">return</span> <span class="nx">slowResolve</span><span class="p">({</span><span class="na">done</span><span class="p">:</span> <span class="kc">false</span><span class="p">,</span> <span class="na">value</span><span class="p">:</span> <span class="nx">next</span><span class="p">});</span>
        <span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
          <span class="k">return</span> <span class="nx">slowResolve</span><span class="p">({</span><span class="na">done</span><span class="p">:</span> <span class="kc">true</span><span class="p">});</span>
        <span class="p">}</span>
      <span class="p">}</span>
    <span class="p">};</span>
  <span class="p">}</span>
<span class="p">};</span>

<span class="k">async</span> <span class="kd">function</span> <span class="nx">main</span><span class="p">()</span> <span class="p">{</span>
  <span class="kd">let</span> <span class="nx">iter</span> <span class="o">=</span> <span class="nx">oddNums</span><span class="p">[</span><span class="nb">Symbol</span><span class="p">.</span><span class="nx">asyncIterator</span><span class="p">]();</span>
  <span class="kd">let</span> <span class="nx">done</span> <span class="o">=</span> <span class="kc">false</span><span class="p">;</span>
  <span class="k">for</span> <span class="p">(</span><span class="kd">let</span> <span class="nx">promise</span> <span class="o">=</span> <span class="nx">iter</span><span class="p">.</span><span class="nx">next</span><span class="p">();</span> <span class="o">!</span><span class="nx">done</span><span class="p">;</span> <span class="nx">promise</span> <span class="o">=</span> <span class="nx">iter</span><span class="p">.</span><span class="nx">next</span><span class="p">())</span> <span class="p">{</span>
    <span class="kd">const</span> <span class="nx">result</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">promise</span><span class="p">;</span>
    <span class="nx">done</span> <span class="o">=</span> <span class="nx">result</span><span class="p">.</span><span class="nx">done</span><span class="p">;</span>
    <span class="k">if</span> <span class="p">(</span><span class="o">!</span><span class="nx">done</span><span class="p">)</span> <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">result</span><span class="p">.</span><span class="nx">value</span><span class="p">);</span>
  <span class="p">}</span>
  <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">done</span><span class="dl">'</span><span class="p">);</span>
<span class="p">}</span>

<span class="nx">main</span><span class="p">();</span>
</code></pre></div></div>

<p>Just as Iterables have the syntax sugars <code class="language-plaintext highlighter-rouge">function*</code> and <code class="language-plaintext highlighter-rouge">for</code>–<code class="language-plaintext highlighter-rouge">let</code>–<code class="language-plaintext highlighter-rouge">of</code>, and Promises have the <code class="language-plaintext highlighter-rouge">async</code>–<code class="language-plaintext highlighter-rouge">await</code> syntax sugar, AsyncIterables in ES2018 come with two syntax sugar features:</p>

<ul>
  <li>Production side: <code class="language-plaintext highlighter-rouge">async function*</code></li>
  <li>Consumption side: <code class="language-plaintext highlighter-rouge">for</code>–<code class="language-plaintext highlighter-rouge">await</code>–<code class="language-plaintext highlighter-rouge">let</code>–<code class="language-plaintext highlighter-rouge">of</code></li>
</ul>

<p>In the example below, we use both features to create an asynchronous sequence of numbers, and consume them with a for-await loop:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">function</span> <span class="nx">sleep</span><span class="p">(</span><span class="nx">period</span><span class="p">)</span> <span class="p">{</span>
  <span class="k">return</span> <span class="k">new</span> <span class="nb">Promise</span><span class="p">(</span><span class="nx">resolve</span> <span class="o">=&gt;</span> <span class="p">{</span>
    <span class="nx">setTimeout</span><span class="p">(()</span> <span class="o">=&gt;</span> <span class="nx">resolve</span><span class="p">(</span><span class="kc">true</span><span class="p">),</span> <span class="nx">period</span><span class="p">);</span>
  <span class="p">});</span>
<span class="p">}</span>

<span class="c1">// 🔷 Production side can use both `await` and `yield`</span>
<span class="k">async</span> <span class="kd">function</span><span class="o">*</span> <span class="nx">oddNums</span><span class="p">()</span> <span class="p">{</span>
  <span class="kd">let</span> <span class="nx">i</span> <span class="o">=</span> <span class="mi">40</span><span class="p">;</span>
  <span class="k">while</span> <span class="p">(</span><span class="kc">true</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">if</span> <span class="p">(</span><span class="nx">i</span> <span class="o">&lt;=</span> <span class="mi">48</span><span class="p">)</span> <span class="p">{</span>
      <span class="k">await</span> <span class="nx">sleep</span><span class="p">(</span><span class="mi">1000</span><span class="p">);</span>
      <span class="k">yield</span> <span class="nx">i</span><span class="p">;</span>
      <span class="nx">i</span> <span class="o">+=</span> <span class="mi">2</span><span class="p">;</span>
    <span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
      <span class="k">await</span> <span class="nx">sleep</span><span class="p">(</span><span class="mi">1000</span><span class="p">);</span>
      <span class="k">return</span><span class="p">;</span>
    <span class="p">}</span>
  <span class="p">}</span>
<span class="p">}</span>

<span class="k">async</span> <span class="kd">function</span> <span class="nx">main</span><span class="p">()</span> <span class="p">{</span>
  <span class="c1">// 🔷 Consumption side uses the new syntax `for await`</span>
  <span class="k">for</span> <span class="k">await</span> <span class="p">(</span><span class="kd">let</span> <span class="nx">x</span> <span class="k">of</span> <span class="nx">oddNums</span><span class="p">())</span> <span class="p">{</span>
    <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">x</span><span class="p">);</span>
  <span class="p">}</span>
  <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="dl">'</span><span class="s1">done</span><span class="dl">'</span><span class="p">);</span>
<span class="p">}</span>

<span class="nx">main</span><span class="p">();</span>
</code></pre></div></div>

<p>Although these are new features, the syntax sugars for AsyncIterables are already supported in Babel, TypeScript, Firefox, Chrome, Safari, and Node.js. AsyncIterables are convenient for combining Promise-based APIs (e.g. <code class="language-plaintext highlighter-rouge">fetch</code>) into asynchronous sequences, such as listing the users in a database, requesting one user at a time:</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">async</span> <span class="kd">function</span><span class="o">*</span> <span class="nx">users</span><span class="p">(</span><span class="k">from</span><span class="p">,</span> <span class="nx">to</span><span class="p">)</span> <span class="p">{</span>
  <span class="k">for</span> <span class="p">(</span><span class="kd">let</span> <span class="nx">x</span> <span class="o">=</span> <span class="k">from</span><span class="p">;</span> <span class="nx">x</span> <span class="o">&lt;=</span> <span class="nx">to</span><span class="p">;</span> <span class="nx">x</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
    <span class="kd">const</span> <span class="nx">res</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">fetch</span><span class="p">(</span><span class="dl">'</span><span class="s1">http://jsonplaceholder.typicode.com/users/</span><span class="dl">'</span> <span class="o">+</span> <span class="nx">x</span><span class="p">);</span>
    <span class="kd">const</span> <span class="nx">json</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">res</span><span class="p">.</span><span class="nx">json</span><span class="p">();</span>
    <span class="k">yield</span> <span class="nx">json</span><span class="p">;</span>
  <span class="p">}</span>
<span class="p">}</span>

<span class="k">async</span> <span class="kd">function</span> <span class="nx">main</span><span class="p">()</span> <span class="p">{</span>
  <span class="k">for</span> <span class="k">await</span> <span class="p">(</span><span class="kd">let</span> <span class="nx">x</span> <span class="k">of</span> <span class="nx">users</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">10</span><span class="p">))</span> <span class="p">{</span>
    <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">x</span><span class="p">);</span>
  <span class="p">}</span>
<span class="p">}</span>

<span class="nx">main</span><span class="p">();</span>
</code></pre></div></div>

<h2 id="operators" class="hr"><span class="hr">OPERATORS</span></h2>

<p>The abstractions listed in this article are simply special cases of the JavaScript function. By definition, they cannot have more power than functions have, which makes the function the most powerful and flexible abstraction of all. The downside of full flexibility is unpredictability. What these abstractions provide are <em>guarantees</em>, and based on guarantees you can write code that is more organized and more predictable.</p>

<p>Functions are also first-class JavaScript values, which allows them to be passed around and manipulated. This capability – passing functions as values – applies equally to the abstractions we saw in this article. We can pass Iterables or Observables or AsyncIterables around as values, and manipulate them along the way.</p>

<p>One of the most common manipulations is <code class="language-plaintext highlighter-rouge">map</code>, popular on Arrays, but also relevant to the other abstractions. In the example below, we create a <code class="language-plaintext highlighter-rouge">map</code> operator for AsyncIterables, and use it to create an AsyncIterable of the names of users in a database.</p>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">async</span> <span class="kd">function</span><span class="o">*</span> <span class="nx">users</span><span class="p">(</span><span class="k">from</span><span class="p">,</span> <span class="nx">to</span><span class="p">)</span> <span class="p">{</span>
  <span class="k">for</span> <span class="p">(</span><span class="kd">let</span> <span class="nx">i</span> <span class="o">=</span> <span class="k">from</span><span class="p">;</span> <span class="nx">i</span> <span class="o">&lt;=</span> <span class="nx">to</span><span class="p">;</span> <span class="nx">i</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
    <span class="kd">const</span> <span class="nx">res</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">fetch</span><span class="p">(</span><span class="dl">'</span><span class="s1">http://jsonplaceholder.typicode.com/users/</span><span class="dl">'</span> <span class="o">+</span> <span class="nx">i</span><span class="p">);</span>
    <span class="kd">const</span> <span class="nx">json</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">res</span><span class="p">.</span><span class="nx">json</span><span class="p">();</span>
    <span class="k">yield</span> <span class="nx">json</span><span class="p">;</span>
  <span class="p">}</span>
<span class="p">}</span>

<span class="c1">// 🔷 Map operator for AsyncIterables</span>
<span class="k">async</span> <span class="kd">function</span><span class="o">*</span> <span class="nx">map</span><span class="p">(</span><span class="nx">inputAsyncIter</span><span class="p">,</span> <span class="nx">f</span><span class="p">)</span> <span class="p">{</span>
  <span class="k">for</span> <span class="k">await</span> <span class="p">(</span><span class="kd">let</span> <span class="nx">x</span> <span class="k">of</span> <span class="nx">inputAsyncIter</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">yield</span> <span class="nx">f</span><span class="p">(</span><span class="nx">x</span><span class="p">);</span>
  <span class="p">}</span>
<span class="p">}</span>

<span class="k">async</span> <span class="kd">function</span> <span class="nx">main</span><span class="p">()</span> <span class="p">{</span>
  <span class="kd">const</span> <span class="nx">allUsers</span> <span class="o">=</span> <span class="nx">users</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">10</span><span class="p">);</span>
  <span class="c1">// 🔷 Pass `allUsers` around, create a new AsyncIterable `names`</span>
  <span class="kd">const</span> <span class="nx">names</span> <span class="o">=</span> <span class="nx">map</span><span class="p">(</span><span class="nx">allUsers</span><span class="p">,</span> <span class="nx">user</span> <span class="o">=&gt;</span> <span class="nx">user</span><span class="p">.</span><span class="nx">name</span><span class="p">);</span>
  <span class="k">for</span> <span class="k">await</span> <span class="p">(</span><span class="kd">let</span> <span class="nx">name</span> <span class="k">of</span> <span class="nx">names</span><span class="p">)</span> <span class="p">{</span>
    <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">name</span><span class="p">);</span>
  <span class="p">}</span>
<span class="p">}</span>

<span class="nx">main</span><span class="p">();</span>
</code></pre></div></div>
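<p>In the same style, we could sketch a <code class="language-plaintext highlighter-rouge">filter</code> operator for AsyncIterables. The operator and the small <code class="language-plaintext highlighter-rouge">nums</code> sequence below are hypothetical illustrations, not part of any library API:</p>

```javascript
// 🔷 A hypothetical filter operator for AsyncIterables
async function* filter(inputAsyncIter, predicate) {
  for await (let x of inputAsyncIter) {
    if (predicate(x)) yield x; // pass through only matching values
  }
}

// A small self-contained async sequence, for illustration
async function* nums() {
  for (let i = 40; i <= 48; i += 2) {
    yield i;
  }
}

async function main() {
  for await (let x of filter(nums(), x => x % 4 === 0)) {
    console.log(x);
  }
}

main();

// (Log shows:)
// 40
// 44
// 48
```

<p>Because operators like <code class="language-plaintext highlighter-rouge">map</code> and <code class="language-plaintext highlighter-rouge">filter</code> are themselves async generator functions, they compose through plain function calls.</p>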

<p>Writing the above code example with none of the abstractions in the Getter-Setter Pyramid requires more code, which is also harder to read. Using operators and new syntax sugar features is how you can take advantage of these special cases of the function to do more with less code, without sacrificing readability.</p>
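<p>For contrast, here is a rough sketch of a similar pipeline written with plain callbacks and no AsyncIterable; the <code class="language-plaintext highlighter-rouge">fetchUser</code> stub below is a hypothetical stand-in for a Promise-based API such as <code class="language-plaintext highlighter-rouge">fetch</code>:</p>

```javascript
// Hypothetical stand-in for a Promise-based API such as fetch
function fetchUser(id) {
  return new Promise(resolve => {
    setTimeout(() => resolve({ id, name: 'User ' + id }), 10);
  });
}

// 🔷 The "list users, map to names" pipeline with no AsyncIterable:
// sequencing, the mapping step, and completion are all wired by hand
function userNames(from, to, onName, onDone) {
  function next(i) {
    if (i > to) return onDone();
    fetchUser(i).then(user => {
      onName(user.name); // inline "map" step
      next(i + 1);       // manually request the next user
    });
  }
  next(from);
}

userNames(1, 3, name => console.log(name), () => console.log('done'));

// (Log shows:)
// User 1
// User 2
// User 3
// done
```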

<svg xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" width="100%" height="440" viewBox="0 0 158.74999 116.41654" version="1.1">
  <g transform="translate(0,-180.58345)">
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 15.393816,254.60231 H 143.99905 l 4.90616,15.59154 H 10.491946 Z" id="function-bg" />
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 31.306206,204.01193 h 35.60154 l -1e-5,15.59154 h -40.5034 z" id="iterable-bg" />
    <text xml:space="preserve" x="37.299671" y="213.99203" style="opacity:1;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Iterable</tspan></text>
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.151916,204.01193 h 24.68811 l 3e-4,15.59154 h -24.68874 z" id="promise-bg" />
    <text xml:space="preserve" x="69.284836" y="214.01372" style="opacity:1;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Promise</tspan></text>
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 94.097966,204.01193 h 34.014774 l 4.90616,15.59154 H 94.097096 Z" id="observable-bg" />
    <text xml:space="preserve" x="96.429359" y="214.01372" style="opacity:1;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Observable</tspan></text>
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 36.603636,187.14867 h 51.097575 l 4.90616,15.59154 H 31.701766 Z" id="asynciterable-bg" />
    <text xml:space="preserve" x="43.947285" y="197" style="opacity:1;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>AsyncIterable</tspan></text>
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 26.011436,220.87593 h 40.95695 l -10e-6,15.59154 h -45.85881 z" id="gettergetter-bg" />
    <text xml:space="preserve" x="28" y="230.8" style="opacity:1;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Getter-getter</tspan></text>
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.131956,220.87593 h 65.297924 l 4.90616,15.59154 H 68.131086 Z" id="settersetter-bg" />
    <text xml:space="preserve" x="85.400581" y="230.89413" style="opacity:1;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Setter-setter</tspan></text>
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 20.703206,237.7389 h 46.26576 l -10e-6,15.59154 h -51.16762 z" id="getter-bg" />
    <text xml:space="preserve" x="34.422062" y="247.7571" style="opacity:1;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Getter</tspan></text>
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="m 68.118406,237.73893 h 70.603814 l 4.90616,15.59154 H 68.117526 Z" id="setter-bg" />
    <text xml:space="preserve" x="96.157547" y="247.75713" style="opacity:1;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Setter</tspan></text>
    <text xml:space="preserve" x="67.759735" y="264.60794" style="opacity:1;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Function</tspan></text>
    <path style="opacity:1;fill:#dfebfa;fill-opacity:1;stroke:#accbf2;stroke-width:0.26;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:17.20000076;stroke-dasharray:none;stroke-dashoffset:9.99999905;stroke-opacity:1" d="M 10.104156,271.46572 H 149.2976 l 4.90616,15.59154 H 5.2022871 Z" id="values-bg" />
    <text xml:space="preserve" x="72" y="281.44595" style="opacity:1;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:6px;font-family:'Roboto';fill:#34506c;fill-opacity:1;stroke-width:0.26"><tspan>Value</tspan></text>
  </g>
</svg>]]></content><author><name>André Staltz</name></author><category term="blog" /><summary type="html"><![CDATA[The cornerstone of JavaScript is the function. It is a flexible abstraction that works as the basis for other abstractions, such as Promises, Iterables, Observables, and others. I have been teaching these concepts in conferences and workshops, and over time I have found an elegant summary of these abstractions, layed out in a pyramid. In this blog post I’ll provide a tour through these layers in the pyramid. FUNCTIONS X =&gt; Y &lt;?xml version=”1.0” encoding=”UTF-8” standalone=”no”?&gt; Function Value The very base of JavaScript are the first-class values such as numbers, strings, objects, booleans, etc. Although you could still write a program that uses just values and control flow, very soon you would need to write a function to improve your program. Functions are unavoidable abstractions in JavaScript, they are often required for async I/O via callbacks. The word “function” in JavaScript does not refer to “pure functions” like in functional programming. It’s better to understand these as simply “procedures”, because they are just lazy reusable chunks of code, with optional input (the arguments), and optional output (the return). Compared to hard coded chunks of code, functions provide a couple important benefits: Laziness / reusability The code inside a function must be lazy (i.e. not executed unless called) for it to be reusable Implementation flexibility Consumers of the function don’t care how the function is internally implemented, so this means there is flexibility to implement the function in various ways]]></summary></entry></feed>