1. 2

    Is anyone using the Android version? I installed it on my phone, but it doesn’t ever seem to ping me on its own, even when I set the average ping interval down to 1 minute.

    1. 1

      Yeah I never tried the official software. It seems sufficiently easy to write your own that I figured I’d rather do that. I’ve also been considering writing an “experience sampling over SMS” sort of app. https://eprints.qut.edu.au/45974/2/45974.pdf

    1. 2

      Nice, I need to try this. This thing could actually be a use case for a smartwatch: if you are mostly working on a few generic work categories, you can just pick from a menu of previously entered items instead of typing text.

      1. 2

        Excellent essay!

        I’ve got a comment and a question:

        Comment. See also: PHP vs. insert your favorite web-app-appropriate language here.

        Question. Does this pattern mean that no financial incentives exist that are strong enough to overcome this individual-hacker independence? Can the power of Lisp not be harnessed, by a group, for profit? Why not?

        Edit: Perhaps the answer is here:

        The Lisp Curse does not contradict the maxim of Stanislav Datskovskiy: Employers much prefer that workers be fungible, rather than maximally productive.

        The author attributes this to the “venality” and “close-mindedness” of managers, but I wouldn’t be so quick to dismiss it. The expected productivity of an employee, after all, is his productivity × his reliability… and this is exacerbated by a team’s dependence on any given employee’s work, etc. Selecting for fungibility in preference to maximal productivity seems to me to be nothing more than perfectly rational optimization on a manager’s part.

        (Of course this is different from employees whose main value is to be sources of ideas, etc., such as in research labs and so forth, or for designers (especially design team leads), etc.—but that’s hardly the median case, and anyway is irrelevant to the subject of programming languages.)

        1. 3

          I don’t know how much of a case this is with Lisp, but a lot of Forth programmers say that while Forth makes for excellent productivity for a capable single programmer, it’s often quite impossible for any other Forth programmer to work on the first one’s program. The programs end up idiosyncratic private languages that are impenetrable without deep study. So it’s less about workers not being maximally fungible but possibly not being able to hire any new productive workers at all for an existing system.

          Maybe a whole programming system should be set up something like this, with a new kind of module system. The inside of a module is full of maximally productive weird hyper-expressive language shenanigans, and then somehow there’s a mutually comprehensible interface layer between the modules, so that the software can be assembled from these interacting fiefdoms. That’s sort of how Unix and Unix programs worked, with the interface layer being plaintext streams. I’m not quite sure what this should ideally look like. The current libraries-and-APIs approach isn’t good enough: you’d want to be able to exchange structures, idioms and sublanguages, not just provide a black box with buttons to push. Also, you would want to avoid boxing single components in a large system into suboptimal architectures by imposing too many constraints from the surrounding area; this part gets really hard.
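
          To make the interface-layer idea concrete, here’s a minimal sketch in Python (all names hypothetical): the inside of the module is free to use whatever expressive tricks its author likes, here a tiny function-pipeline DSL, while the boundary exposes only a dull, documented function over plain data.

              # fiefdom.py -- hypothetical sketch: idiosyncratic internals, plain boundary
              def _pipe(*stages):
                  # compose single-argument functions left to right
                  def run(value):
                      for stage in stages:
                          value = stage(value)
                      return value
                  return run

              _normalize = _pipe(str.strip, str.lower)

              # The interface layer: plain data in, plain data out. Callers never
              # see the internal DSL, only a documented function over dicts.
              def canonical_names(records):
                  """Take a list of {'name': str} dicts, return canonical names."""
                  return [_normalize(r["name"]) for r in records]

          Of course, that boundary is still exactly the “black box with buttons” problem mentioned above; the hard part is sharing the internal idioms across the boundary without also exporting their constraints.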

          Maybe Alan Kay is on to something with the whole “cells receiving chemical signals and deciding what to do, not circuits receiving electrical impulses and reacting with lockstep determinism” metaphor he’s been trying to push for the last 40 years. (Also sorta related, Functional in the small, OO in the large.)

          1. 1

            Maybe Alan Kay is on to something with the whole “cells receiving chemical signals and deciding what to do, not circuits receiving electrical impulses and reacting with lockstep determinism” metaphor he’s been trying to push for the last 40 years.

            Do you have any links/references about this? It sounds really interesting, and I’ve not heard of it before!

            1. 1

              There’s a bit of it in the Programming and Scaling talk that was linked here a while ago.

              Some emails about the origin of OO, more on messaging.

            2. 1

              By the way, “Functional in the small, OO in the large” is a big part of how Swift works. (Well, how I write Swift, anyway. Non-functional (heh) approaches are certainly also available.)

            3. 3

              Does this pattern mean that no financial incentives exist that are strong enough to overcome this individual-hacker independence?

              Paul Graham:

              So you could say that using Lisp was an experiment. Our hypothesis was that if we wrote our software in Lisp, we’d be able to get features done faster than our competitors, and also to do things in our software that they couldn’t do. And because Lisp was so high-level, we wouldn’t need a big development team, so our costs would be lower. If this were so, we could offer a better product for less money, and still make a profit. We would end up getting all the users, and our competitors would get none, and eventually go out of business. That was what we hoped would happen, anyway.

              What were the results of this experiment? Somewhat surprisingly, it worked. We eventually had many competitors, on the order of twenty to thirty of them, but none of their software could compete with ours. We had a wysiwyg online store builder that ran on the server and yet felt like a desktop application. Our competitors had cgi scripts. And we were always far ahead of them in features. Sometimes, in desperation, competitors would try to introduce features that we didn’t have. But with Lisp our development cycle was so fast that we could sometimes duplicate a new feature within a day or two of a competitor announcing it in a press release. By the time journalists covering the press release got round to calling us, we would have the new feature too.

              It must have seemed to our competitors that we had some kind of secret weapon—that we were decoding their Enigma traffic or something. In fact we did have a secret weapon, but it was simpler than they realized. No one was leaking news of their features to us. We were just able to develop software faster than anyone thought possible.

              1. 2

                Yes, I’ve read that essay too, but—why only that example? Where are the others? If Lisp is so good, why isn’t everyone using it? Surely the fact that Paul Graham, of all people, used it, to make money, and now sings its praises, ought to spur a whole host of people to check it out, discover it, put it to use, and profit thereby? Why isn’t it way, way, way more popular?

                1. 1

                  It seems like when people try to use it, they find the ecosystem nasty. The very visible classic example of people buying pg’s Lisp evangelism, actually building a thing in Lisp and getting burned was the original Reddit. Another report of a reasonably capable newcomer trying to grab onto Common Lisp and bouncing off was Steve Yegge. It seems like you really need people building a common ground of solid libraries nowadays, and can’t really build solid stuff in an ecosystem where everybody has their own 80 % solution and the venerated standard solution was informed by 1980s computing practices.

                2. 2

                  I have issues with that essay. Paul Graham never seems to acknowledge the selection bias that using Lisp imposes. It’s entirely possible that the reason ViaWeb did well isn’t because it was written in Lisp, but because it attracted the sort of people who learn Lisp. It’s entirely possible that if he’d attracted those same programmers, but somehow managed to get them to use Perl, he’d have been just as productive.

                  Lisp was not (and still is not) a “mainstream” programming language. That means you have to go out of your way to learn Lisp. The sorts of programmers who go out of their way to learn obscure programming languages and libraries are more likely to be more productive (or, in Silicon Valley terms, “passionate”) programmers who would be able to bring that additional productivity to bear no matter what language they were using.

                  1. 1

                    Selection bias seems like it could explain a lot of this puzzle.

                    The question is, do we find the same effect for other, comparably obscure / non-mainstream languages?

              1. 1

                So, this is one of those interesting mixes of “hopelessly idealistic but totally correct” and “insane” that you sometimes get from people who are contrarian, principled, technically oriented, and confused about what things are for and why people do things.

                Let’s dive in.

                I – Obeys operator

                Yes. No quibble with this one. It’s inexcusable that this is ever not the case.

                II – Forgives mistakes

                This is absolutely the ideal, but note that implementing this would require a Herculean effort, all the more improbable because it would be distributed across many people, companies, etc. (For a roughly comparable, if smaller-scale, thing, see the Mac OS’s pervasive, system-wide, everything-interoperable-with-everything copy/paste and drag/drop support; starting from the idea for NSPasteboard and getting to anything resembling today took 20 years and 20 system versions, of course with various partial or weaker implementations along the way, etc.)

                III – Retains knowledge

                This seems to require that no large (multi-gigabyte) data ever be loaded wholly into RAM, nor operations on such data ever be conducted in RAM. How would following this law not result in a tremendous downgrade in performance?

                IV – Preserves meaning

                This is unrealistic. No closed-source software? Ok, good luck accomplishing anything in the real world.

                V – Survives disruptions

                Who the heck wants to interactively debug crashes? Also, what on earth is “an error condition requiring the operator’s intervention”?

                VI – Reveals purpose

                Again with the debugger… Well, this is an ok ideal, but it seems largely irrelevant to me, for most people in most cases.

                VII – Serves loyally

                This I agree with wholeheartedly.


                Here’s the crux of the matter:

                Most people aren’t programmers and don’t want to be.

                Any ideal, or plan, or design, that asks users to be programmers, or expects that they’ll be programmers, etc., is fundamentally confused and doomed to failure, as well as actually not being “ideal” in any meaningful sense.

                1. 1

                  It’s important to note that the whole ‘debug any crash’ thing comes from Lisp Machines. People who used Lisp Machines really liked that they could in principle recover from any error if they wanted, even if in practice they might decide to just let the thing go. It’s also where things like needing access to the source code come from, as the Lisp Machine allowed introspection abilities that were simply beyond what’s available on a modern *nix-based system.

                  You can read more in this vein in the Unix Haters Handbook:

                  http://web.mit.edu/%7Esimsong/www/ugh.pdf

                  1. 1

                    My handwavy thought about the forgives mistakes / retains knowledge thing is that if we stick to human workstation use instead of scientific computing and similar data crunching, the amounts of data involved in something like video editing can be huge, but the data to describe the user inputs that led to the state should be small enough to serialize without much effort. So for the video editing case, you’d need to treat the source video file as sacrosanct, save user inputs, and then have the user operations be implemented in a way that they can be reversed or just re-walk the input sequence up to the earlier point from which you’d want to resume (which might be slow). Baking this so deeply into the OS+programming language layer that doing things like this would be a no-brainer for any new industrial 3D modeling or photoediting application you’d develop would of course be an interesting engineering exercise.

                    This wouldn’t get you quite the “up to the current state as fast as turning on a lamp” speed, but replayable operations would let you keep large complex state in RAM and be able to have the machine load the starting state and the operation sequence from non-volatile memory and then play back the operations to get you to your current state.
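
                    A minimal sketch of that replay scheme, with hypothetical stand-in names (this isn’t any real editor’s API): the source file is only ever read, the persisted state is just the op log, and resuming (or undoing) means replaying a prefix of it.

                        import json

                        def load_source(path):
                            ...  # open the sacrosanct source file, read-only (stub)

                        def apply_op(state, op):
                            ...  # perform one recorded user operation (stub)
                            return state

                        def save_log(ops, path):
                            # the op log is tiny compared to the video itself
                            with open(path, "w") as f:
                                json.dump(ops, f)

                        def restore(source_path, log_path, up_to=None):
                            # rebuild the working state by replaying; possibly
                            # slow, but exact, and replaying up to an earlier
                            # point doubles as undo
                            state = load_source(source_path)
                            with open(log_path) as f:
                                for op in json.load(f)[:up_to]:
                                    state = apply_op(state, op)
                            return state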

                    Even a “no closed-source software on client machines” rule would still let you do business with proprietary software; you’d just need to run the bits you want to stay hidden on your own server. This part is hit hardest by the thing where regular users don’t want to be programmers. Users are going to notice if you can turn the machine off and on and it starts instantly with all the stuff you had on it earlier, and they’ll expect it to keep doing that. They won’t notice or care if the software code has been piped through a code obfuscator.

                  1. 3

                    Okay so besides being generally cool, my question with this post is:

                    It cost the guy $9000 or so to build what is essentially a movable dwelling, and it looks quite cool. What I naturally think of, then, is: what are the economics if everyone decided to do this?

                    As in, right now the going price for a home is hundreds of thousands of dollars. What exactly is the thing that makes building a home so damn expensive that this bus leaves out? Would the cost of busing go up if more people decided to do it? My candidates would be:

                    • Living Space
                    • Electrical Wiring
                    • Real Plumbing
                    • Real Kitchen
                    • (EDIT) Laundry
                    1. 3

                      Those are all pretty important things!

                      Also, a house can house a whole family. Can you imagine living in that bus, as a family? That is hellish. Basically, “what are the economics if everyone lives in dorms / Soviet communal apartments”? Interesting abstract economic question, not remotely realistic actual scenario.

                      Edit: Also, it looks massively claustrophobic. I can see having a panic attack within a day of living there.

                      It’s probably relevant that there’s no talk, on that page, of anyone actually deciding to live there, much less actually going and living there for any non-trivial length of time.

                      Like, this is basically a trailer home. Trailer homes are a thing you can buy. How much do they cost? Is living in a trailer a comfortable or an uncomfortable experience? That’s a thing you can do, right now—buy a trailer home and live in it. Do people do that? Yeah. Do people do that who aren’t forced to do so by inescapable financial circumstances? No. Why not?

                      1. 2

                        Ahhh, it’s the family bit I hadn’t considered.

                        I should probably admit that building such a monstrosity and then driving off with it in one form or another has been one of my not-so-guilty fantasies for a while. I’m not sure it would really be feasible but damn is it attractive in many ways.

                        https://ruby-on-wheels.github.io/blog/the-first-150-days-of-van-life/

                        This article points out that this kind of thing probably only works in a suitable climate: https://www.consumerismcommentary.com/extreme-frugality-living-out-of-your-car/

                        As for not realistic, yeah I probably shouldn’t have said ‘everyone’. What I really meant was something more like “If say, 20% of the population decided to do this, what would happen?”

                        1. 2

                          Trying to partially answer my own question in the spirit in which it’s posed:

                          This kind of lifestyle implicitly relies on some gray-market economics that only remain viable for as long as the number of people freeloading is small. For example:

                          • ‘Public’ Bathrooms
                          • Parking space

                          Are both things that people doing this generally squat on or take advantage of in ways that aren’t intended behavior. Once anything like this became remotely common, those resources would start to become more locked down, and with the increase in people trying to utilize them, the remaining spaces would become more crowded, causing even more previously ‘open’ spaces to lock down further. Ultimately, people would probably have to pay for the land they’re using one way or another. I don’t really know what the specific dollar amounts look like on that, but with the sheer number of people suddenly creating demand, I would expect it to be substantial enough to offset a lot of what you save by giving up easy access to sane plumbing.

                          1. 1

                            Trailer parks are already a thing in the US, aren’t they?

                      1. 1

                        Sometimes you get some good manifesto fodder if you’re a cranky idealist who has little regard for the dizzyingly high local-optimum peak that current computing technology has climbed onto.

                        1. 3

                          I think this is a high value problem for rationalists to solve, given that many problems would be much easier to reason about if you knew about all the prior approaches to the problem that had been tried and discarded. If information disappears or becomes inaccessible, that becomes impossible, and you no longer find yourself able to stand on the shoulders of giants. However, I don’t see very many other people talking about the problem of information preservation, and those that are don’t really seem to be doing a good job of it.

                          Actually, this is one of my major interests.

                          Why do you think I insisted that all the stories on Whistling Lobster be especially well indexed?

                          Why do you think I was so enamored when it turned out that the Lobsters software sends all submitted stories to archive.is?

                          Why do you think the feature I decided was most important for me to try and work on was search, of all things?

                          I’ve participated quite a bit in stuff like Archive Team precisely because I think that we need to be preserving knowledge now before we realize we want it later.

                          As for how to get organizations to do this, I don’t know. I’m currently pursuing an internship in which I try to document a lot of institutional knowledge in a white paper. I think that ‘person who preserves things’ is one of those roles that is probably unnamed in the business context.

                          1. 3

                            Why do you think I was so enamored when it turned out that the Lobsters software sends all submitted stories to archive.is?

                            It doesn’t seem to automatically send the links, though; it just provides the search bookmark. If you check the ‘cached’ link for this story, for example, there are no results on archive.is.

                            EDIT: Is it even possible to automatically archive stuff there given how it’s constantly making human visitors solve captchas?

                            1. 2

                              Confirmed that the caching isn’t automatic, but I just saved the link for this story and didn’t have to solve any captchas. Perhaps it is automatable, after all?

                              1. 1

                                This should now be fixed.

                              2. 1

                                Oh dear, that’s really unfortunate. I’ll have to work on that then. Thanks for pointing it out.

                            1. 1

                              Okay so I’ve kind of been laying off it because no one else is complaining but…I really feel like a lot of the links you post (and you do post a lot of them) are either useless insight porn or just plain off topic.

                              Could you please explain why you think this story is fit for Whistling Lobster?

                              1. 2

                                I’m trying to sound out what exactly should count as on-topic, so I post stuff that seems acceptable to me and see what kind of response it gets. I’ll try to stick to doing one link a day instead of spamming additional ones impulsively.

                                This one is definitely a breaking-news, not long-term interesting stuff one. It seemed unusually notable in the tech context since it sounds like it can compromise wireless networks that are used just about everywhere. Though yeah, in retrospect, “the internet is a horrible mess of security holes, news at 11” isn’t that impressive on a site that isn’t particularly tech-focused.

                                1. 2

                                  I would say that cutting back on the current-events stuff is a good idea. WL is not a news site! There are plenty of other news sites out there.

                                  Consider a heuristic: a) is this on Hacker News? b) is this basically indistinguishable from all the other things on Hacker News? If yes to both, seriously consider whether it’s fit for WL.

                                  After all, what use WL, if we’re just HN? There’s already one HN… one’s enough! ;)

                                  1. 1

                                    P.S. Here’s the thing about “off-topic” (note, this is not to disagree with what Hypothesis said, just to phrase it somewhat otherwise):

                                    It’s not, exactly, that this is “off-topic”, per se, but that it’s insufficiently excellent—in exactly the way you say:

                                    breaking-news, not long-term interesting stuff

                                    We want the “long-term interesting stuff”! So, for example, some essay by a security researcher that was a broad overview / in-depth analysis of various sorts of security issues like this one—that would be “on topic”.

                                1. 1

                                  On the same theme as my other comment—Randal Schwartz, the quoted “Perl guru”, says in the comments to the linked article:

                                  Perl probably has a greater spread from beginner to expert than almost any other language, simply because it’s optimized for the experts.

                                  Is that because the beginners’ baseline is lower? (Also, what exactly is this a spread of? If it’s just “fluency in the language”, then it’s a largely useless thing to talk about, isn’t it? We want to know what people—whether experts or beginners—can do with it…)

                                  (Disclaimer to all of my comments on this topic: I haven’t written a line of Perl in my life.)

                                  1. 1

                                    Perl code can be very idiosyncratic and dense. The first challenge for beginners is just being able to read the code. Python is probably easier to read if you know very little about the language, and has cultural norms for keeping the code reasonably uniform and simple.

                                  1. 1

                                    An interesting set of claims. Here are the questions that occurred to me immediately when reading this:

                                    Of course you have to use any language, human or computer, to achieve and maintain fluency. But my sense is that Perl requires more frequent use than other programming languages in order to remain minimally competent, and it repays frequent use more than other languages. I imagine this is a consequence of the natural language principles baked into the language.

                                    Ok, but what exactly does “repays frequent use” cash out to? Is that just to get to the same baseline of fluency, and/or of expressiveness and power, as another language? Is Perl “unusually expressive”? And what does that mean, exactly?

                                    1. 2

                                      As far as I understand it, the expressiveness cashes out to how much you can do with very short programs. Perl programmers do a thing where they come up with programs so short that they just write them directly on the command line instead of bothering to save them in a file first. The APL/J programming language family is another notably expressive one that probably also repays frequent use.

                                      1. 1

                                        Interesting. So, the idea is that Perl, more so than (or even entirely unlike) other languages, if it’s frequently used, lets you do a lot with very short programs?

                                        That could certainly be useful. What’s the scope of what sorts of things can be done in this way?

                                        1. 2

                                          I’ve written something like two Perl programs myself, but I understand a lot of the uses are for setting up Unix pipe systems, where you have your Perl script consume one stream of structured text and emit some cleverly transformed new stream of structured text. That’s what most of the programs in the oneliner book I linked above seem to be like. I don’t think Perl users are big on Haskell style mathematical abstraction, but I don’t really know what the wider reaches of the Perl sphere are like.
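
                                          To give that shape concretely (sketched in Python rather than Perl, so deliberately not one-liner material): read records from stdin, transform them, write them to stdout, and let the shell pipeline do the plumbing.

                                              import sys

                                              # classic Unix filter: one record per line in,
                                              # one transformed record per line out
                                              for line in sys.stdin:
                                                  fields = line.rstrip("\n").split("\t")
                                                  if len(fields) >= 2:
                                                      # e.g. swap the first two columns
                                                      fields[0], fields[1] = fields[1], fields[0]
                                                  print("\t".join(fields))

                                          Perl’s -n, -a and -p switches generate that read-split-print scaffolding for you, which is a big part of what makes writing the rest directly on the command line practical.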

                                    1. 1

                                      This is off-topic for the post, and I am definitely going to comment substantively once I finish reading what looks like a very interesting post, but for now, I really just want to make a public service announcement, to anyone who might be reading this and designs websites at all:

                                      STOP USING 100 WEIGHT RALEWAY FOR BODY TEXT!

                                      For god’s sake, that font is a display font, meant for giant headings in 100px size, not body text! (Yes, this does make me sympathize with people who turn off webfonts.)

                                      1. 2

                                        Can you ELI5 why this is bad?

                                        1. 2

                                          Totally. It’s very simple:

                                          Raleway was originally released only in 100 weight, a.k.a. “Thin”—the thinnest weight most fonts are released in. (It was later expanded to a full set of weights by Google Fonts.) It’s a beautiful font. It’s become fashionable, in web design circles.

                                          But at weight 100, it’s very thin. At body text sizes, it’s so thin as to be unreadable, even on Macs. Using such a thin font for body text is foolish. There’s no reason for it. People do it because they’re following a design fad, without even understanding that fad—Raleway in 100 weight makes a good font for headers, and other text in large point sizes, and that’s what makes it popular (you can see it being put to good use in the page header of Slate Star Codex, for instance)—that’s what a “display font” is for. For body text, a heavier weight of font is called for! (Why not use a heavier weight of Raleway itself, for example? It’s available from Google Fonts! For free!)

                                          1. 2

                                            Checked the font; the thinnest version on the font render page does look nasty, but that’s not what I see on the page. Here’s what the weight rendering and the article text look like for me.

                                            1. 1

                                              Interesting! That’s… mildly better, though still not heavy enough for readability, in my view. Perhaps they serve a heavier weight to non-Macs (I seem to recall your OS of choice being some sort of Linux?).

                                      1. 3

                                        Oh boy, a Ribbonfarm post… :p

                                        I am terribly torn about having Ribbonfarm links on WL. On the one hand, sometimes there’s some very insightful stuff in there. On the other hand, as a friend of mine would say: it’s the most insight porniest insight porn that ever insight porned.

                                        So let me go ahead and ask that we have a norm of “never a Ribbonfarm link without a comment that says what specifically is the takeaway”.

                                        (In this case, it looks like you linked to a comment and not to the actual Ribbonfarm post, so the situation’s a bit different. I think quoting the comment wholesale isn’t unreasonable; what does everyone else think?)

                                        (Also, unrelatedly, stylistic note: Word Caps—capitalizing each word in a sentence—is also known as “title case”, as it indicates that the sentence is a title. That is inappropriate in this case, as the link text isn’t the title of anything, it’s just a summary of the thing you’re linking to. Sentence case—initial-word caps only—is appropriate here. I’ve gone ahead and changed it.)

                                        1. 3

                                          “never a Ribbonfarm link without a comment that says what specifically is the takeaway”.

                                          Wonder how hard it would be to make the “submit link” page support messages like “hey, looks like you’re submitting a link to $DOMAIN, you should know we have $COMMUNITY_NORM for links to $DOMAIN”.
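
                                          Mechanically it seems simple enough; a hedged sketch of the lookup (hypothetical names, and in Python purely for illustration, the actual site code being Ruby):

                                              from urllib.parse import urlparse

                                              # hypothetical domain -> community-norm messages;
                                              # in practice these would live in site config
                                              DOMAIN_NORMS = {
                                                  "ribbonfarm.com": "Heads up: links to this domain "
                                                      "should come with a comment saying what "
                                                      "specifically the takeaway is.",
                                              }

                                              def norm_message_for(url):
                                                  host = (urlparse(url).hostname or "").lower()
                                                  for domain, message in DOMAIN_NORMS.items():
                                                      if host == domain or host.endswith("." + domain):
                                                          return message  # show on the submit form
                                                  return None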

                                          1. 2

                                            That’s an excellent idea, and I’ve added an issue to that effect to the issue tracker.

                                          2. 1

                                            Comment quote added, I think that works yeah. Thanks for the tip about casing too, I was a bit confused.

                                          1. 1

                                          Bad analogy. I am talking about books in general; sentimental value applies only to specific books.

                                            I loved my teddy bear*, but I don’t claim all teddy bears are sacred, or that everyone should love teddy bears, or that anyone who doesn’t is fundamentally flawed in some way, or that the destruction of a teddy bear is inherently wrong.

                                            But that’s just the thing. You might not love each and every teddy bear, but someone does. Thus, even if a particular physical book is not valuable to you, I would say it’s valuable to someone. I find the rationalist tendency to treat books as disposable objects somewhat abhorrent.

                                            1. 1

                                              There are a lot of particular physical mass-market paperback books in the world right now; you’d need to work pretty hard to get all of them matched with affection from an actual human. I think this is actually going to change now that ebooks and e-readers are an established thing. The physically nasty books that basically just serve as maximally cheap text-delivery platforms are first in line to get dropped in favor of ebooks. The physical books that have a better value proposition against ebooks are the nice, more expensive, durable hardcover ones. So there might be fewer physical books overall, biased toward books people want to keep around on their bookshelf instead of reading through once and then forgetting about.

                                              1. 2

                                                There’s something to what you say, I agree, but ultimately I think that drawing the distinction between “softcover” and “hardcover” takes this in the wrong direction. I have some softcover books which I value quite a bit, more than I value some of my hardcover books. The actual spectrum of value is perhaps not entirely orthogonal to the book’s physical form factor (there is a correlation, for various obvious reasons), but it’s orthogonal enough that I don’t consider it to be a very useful dimension to think about. The following factors all seem more relevant:

                                                • Content
                                                • Availability/rarity
                                                • Uniqueness (related to the previous point but not identical to it)
                                                • Personal history with the book

                                                (Some examples, from glancing at my bookshelf: my hardcover mass-market volumes of Sergei Lukyanenko’s less… artistically valuable… novels, vs. my softcover copies of Gödel, Escher, Bach and The Cyberiad; a hardcover copy of Scott Adams’ The Dilbert Principles vs. a softcover copy of Robyn Dawes’ Rational Choice in an Uncertain World; etc. Also, I have some beautifully custom-bound, hardcover, gold-embossed volumes of mass-market cheaply printed Soviet children’s books; where do you classify those? And this is just the tip of the iceberg, example-wise…)

                                                1. 1

                                                  Yeah, I was mostly going for the “correlation for various obvious reasons”, and that there are currently probably lots of physical books that are of pretty low value, not an absolute demarcation at exactly what the non-valuable books are like.

                                                  1. 1

                                                    Yeah, agreed that there’s a correlation. What occurs to me about this is that this particular correlation is probably strongest for relatively current books, with large publishing runs, but gets weaker as you go back in time and out towards the tail of the popularity / rarity distribution.

                                              2. 1

                                                Agreed, and this is one of the tendencies that set me off from “rationalist culture”. (I guess many folks in said culture found the shredders in Rainbows End totally unobjectionable, but to me they were an element of “mundane dystopia”…)

                                              1. 2

                                                I really wish that this parable helped me understand the underlying conceptual model of Git :(

                                                It’s kind of weird, actually; the author goes through all the stuff that, a priori, seems obvious, but then when I thought he was going to start talking about the actually interesting / confusing / tricky parts (the big one being merging of branches, and various other mostly-branch-related matters), he just sort of… didn’t talk about that at all.

                                                (The explanation of the objects directory and blobs also made my eyes glaze over; it seemed to be sorely in need of an introductory paragraph or two, explaining the concepts behind it; instead, it basically launched right into technical nitty-gritty. But maybe I just need more sleep to deal with this.)

                                                The overall impression I got from this article was that it was written by someone who sees code, per se, as an end unto itself, and who would be mildly surprised, in a detached way, to learn that people sometimes want their code to do things, like be compiled, and even run. But that’s probably unfair, and unduly influenced by my general bitterness about Git.

                                                I remain hopeful that one day, someone will write something that lets me actually understand Git!

                                                1. 2

                                                  Can you tell more about how you find branch-merging confusing?

                                                  I seem to get pretty far with git with the mental model of “it’s a DAG of directory tree snapshots that are identified by their Merkle tree hashes”. There’s Git for Computer Scientists if this is a helpful approach.

                                                  In this model, a branch merge commit is a working-tree snapshot with multiple parents, with nothing stranger than that. In practical Git use, merge commits behave differently, since in the Git UI you usually treat commits as diff-patch-like things instead of brand-new snapshots of the entire directory, and there’s no canonical parent commit to diff against when a commit has multiple parents.
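
                                                  A toy rendering of that model (explicitly not Git’s real object encoding, just the same shape): every object is addressed by the hash of its content, and a merge commit is simply a commit with two parents.

                                                      import hashlib, json

                                                      def hash_obj(data):
                                                          # content-address an object by hashing
                                                          # its serialized form
                                                          blob = json.dumps(data, sort_keys=True)
                                                          return hashlib.sha1(blob.encode()).hexdigest()

                                                      def commit(tree, parents, msg):
                                                          # a commit = whole-tree snapshot hash
                                                          # plus zero or more parent hashes
                                                          return hash_obj(
                                                              {"tree": tree,
                                                               "parents": sorted(parents),
                                                               "msg": msg})

                                                      root = commit(hash_obj({"a.txt": "hi"}), [], "init")
                                                      left = commit(hash_obj({"a.txt": "hi!"}), [root], "edit a")
                                                      right = commit(hash_obj({"a.txt": "hi", "b.txt": "x"}),
                                                                     [root], "add b")
                                                      # the merge is nothing stranger than two parents:
                                                      merge = commit(hash_obj({"a.txt": "hi!", "b.txt": "x"}),
                                                                     [left, right], "merge")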

                                                  1. 1

                                                    Can you tell more about how you find branch-merging confusing?

                                                    Not very coherently or concisely, I’m afraid. But—thanks for the link! (Ditto @ alstroemeria313.) I’ll take a look and see if my confusion is dispelled (or at least its nature clarified).

                                                  2. 2

                                                    Merging works by finding all the diffs between now and the branches’ last common ancestor and re-applying them serially, using some heuristics to get them in a sensible order. Does that help at all?
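
                                                    I can’t speak to the exact heuristics, but the skeleton is easy to sketch at file granularity (a hypothetical simplification; real merges also recurse into line-level diffs within each changed file):

                                                        def merge3(base, ours, theirs):
                                                            # three-way merge over {path: content}
                                                            # snapshots; base is the common ancestor
                                                            merged, conflicts = {}, []
                                                            paths = set(base) | set(ours) | set(theirs)
                                                            for path in paths:
                                                                b = base.get(path)
                                                                o = ours.get(path)
                                                                t = theirs.get(path)
                                                                if o == t:
                                                                    # both sides agree (incl. deletes)
                                                                    if o is not None:
                                                                        merged[path] = o
                                                                elif o == b:
                                                                    # only theirs diverged: take theirs
                                                                    if t is not None:
                                                                        merged[path] = t
                                                                elif t == b:
                                                                    # only ours diverged: take ours
                                                                    if o is not None:
                                                                        merged[path] = o
                                                                else:
                                                                    # both diverged differently
                                                                    conflicts.append(path)
                                                            return merged, conflicts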

                                                    1. 1

                                                      I… think it might, actually.

                                                      Follow-up questions:

                                                      What are these heuristics? And what constitutes a sensible order? Where can I read about that part of it in particular?

                                                  1. 4

                                                    Why not link to the original at http://www.thagomizer.com/blog/2017/09/29/we-don-t-do-that-here.html ? The tumblr link in the post seems to be just quoting from there.

                                                    1. 1

                                                      Thanks, I wasn’t aware there was a larger post. The original is just a teensy bit more SJ focused than the excerpt version, but I’ll go ahead and swap the link out.

                                                      1. 2

                                                        re: “teensy bit more SJ focused”:

                                                        I actually think there’s a valuable insight here, which is this. The post author says:

                                                        In the world I want to live in, we don’t have to set negative rules like “don’t harass people.” Instead, we could get by with positive guidelines like “be welcoming” and “be kind” and use our giant human brains to figure out how to apply those values to novel situations. When I get the chance, I try to create those spaces. When I have the energy, I try to educate and inform instead of correct. But I still keep this simple phrase in my back pocket as a tool for ending and defusing situations when other approaches don’t work.

                                                        It seems to me that, in fact:

                                                        1. Those “global” approaches don’t work more often than they do;
                                                        2. The “local” approaches do work, and not just in the interim or as patches, but on a long-term basis;
                                                        3. The “local” approaches result in a better “global” state of affairs than the “global” approaches do.

                                                        (Relevant: Scott Alexander’s “Archipelago”; also, actual real-world sovereign countries.)

                                                        It seems to me that—yes, letting go of attempts to use the “global” approach in all cases, and falling back to “local” approaches when need be, is a huge, a vast, improvement. But even better may be to take the next step, and abandon the “global” approach altogether!

                                                        P.S. There’s a big hint, I think, here:

                                                        Instead, we could get by with positive guidelines like “be welcoming” and “be kind” and use our giant human brains to figure out how to apply those values to novel situations.

                                                        The author seems to be (perhaps without fully realizing it herself!) pointing at the notion that starting from really big, really general values, and trying to apply them to novel situations, is incredibly demanding, cognitively / computationally. To demand that people do this is essentially to demand that everyone personally, individually re-derive morality nearly from scratch! This is obviously unreasonable, especially since the nuances of ethical issues often depend on domain- and situation-specific considerations, or complex incentive structures, or introspectively non-obvious features of human psychology, or differences in mental make-up (such as the sort of which Scott Alexander has recently written). It is not obvious to me that the author’s ideal world can exist even in principle; how much less so in practice…?

                                                    1. 1

                                                      Why is that, do you think? I mean, why was Scheme a dead-end in this way?

                                                      1. 2

                                                        Well, it’s not completely dead: Clojure is the latest Lisp-alike incarnation, and it seems to be doing okay. The Racket language also seems to have a bunch of stuff to it, but the only project I can think of that’s written in Racket is Paul Graham’s Arc language. On the academic side, I think typed functional programming has taken over because the abstractions it provides are more like mathematical structures than the more ad hoc things you can do with Lisp homoiconicity. Maybe a bit like the more structure-rich category theory taking over from set theory as the preferred formalism for foundational mathematics?

                                                        On the practical programming side, Scheme needs a culture and an ecosystem more than other languages do. You can build clever things on top of it, but you need to build clever things on top of it to keep your code readable, and if you’re collaborating with other people you need everyone to agree enough on how those things work. If you just take C programmers and give them Scheme, they will start writing C in Scheme syntax, and because Scheme provides so few affordances built-in, the code that gets you is less fun than C written in, say, Python syntax would be.

                                                        Actually, “Python happened” might be part of the story on the hacker side. Peter Norvig has talked about how in the early 90s he was a big Lisp advocate, going on about all the fancy tricks you could do with Lisp that were painfully complicated in the other common languages of the time. He said somewhere that at some point in the 90s he noticed that other people had also started quickly writing short programs doing clever tricks, and they were using the newfangled scripting languages like Python. The big difference is that Python doesn’t provide a macro system for defining entirely new language structures like Lisp does. On the other hand, out of the box it’s easier to write and read Python programs for most common programming tasks.

                                                        1. 2

                                                          Interesting. At some point I really need to learn Lisp, if only to understand what the heck people are talking about when they have these discussions (and possibly to gain the enlightenment that is so often promised from learning Lisp).

                                                          The Racket language also seems to have a bunch of stuff to it, but the only project I can think of that’s written in Racket is Paul Graham’s Arc language.

                                                          Actually, there’s also this: http://docs.racket-lang.org/pollen/ (by typographer Matthew Butterick; more info at https://practicaltypography.com/why-racket-why-lisp.html).

                                                      1. 1

                                                        Which is which? :)

                                                        (Yes, I know I could read the article myself and find out! But as long as you’re excerpting… ;)

                                                        1. 2

                                                          Drexler’s idea of nanotechnology is getting to atomically precise manufacturing via molecular design and synthesis. Nanobots are the fictional misrepresentation of Drexlerian nanotechnology. Nanomaterials are the stuff that people who aren’t Drexler, but who say they do nanotechnology, are actually doing, and nanoelectronic systems (the ongoing technological revolution) are just modern integrated circuits.

                                                          1. 2

                                                            To confirm my own understanding:

                                                            Of the three, one is a revolutionary prospect,

                                                            That would be “atomically precise engineering”…

                                                            one is a fantasy,

                                                            (“miraculous nanobots”)…

                                                            and the third is mostly materials science.

                                                            (“nanomaterials, nanodevices”)…

                                                            As for the other two kinds, one is the heart of today’s greatest technological revolution,

                                                            (“nanoelectronic systems”)…

                                                            while the other is the basis for progress toward the revolutionary prospect

                                                            (“molecular design and synthesis”).

                                                          1. 2

                                                            Aside: the proposed “shallow humor” downvote category seems like it can be included under the umbrella of “spam”. Does anyone disagree?

                                                            Actual spam is written by robots, and you can usually instantly tell it has been written by robots; I think it should have its own dedicated category, to keep reporting it an utter no-brainer. And it should actually be reported instead of just downvoted, since the next action is to ban the robot, not discourage the human. Though I’m not sure how we’d get a persistent problem with this as long as we stick to the invite system. I’d maybe make the key identifier of spam be “tries to sell something” instead of “is noise”. Then there are also the actual humans who are only on the forum to promote their YouTube channel, who could fall into the same bin.

                                                            “Low-effort comment” should be a good name for the dumb-jokes category. You could also mention the category of comments that only say something like “Great article.” Those might well be written by sincere humans and aren’t actively annoying, until you realize they might as well have been posted by spambots, since they could be copied verbatim to 90 % of the posts.

                                                            1. 1

                                                              This is a better distinction, I agree.

                                                              I hesitate to discourage “Great article” too much. Yes, such comments could be copied verbatim to most posts, but on the other hand, given, again, the invite-only nature of membership, what’s being said is not “someone thinks this is a great article” but “this specific member thinks this is a great article”. And, at the same time, discouraging such comments can lead to an atmosphere where only criticism is visible (as criticism tends to be more specific than praise).

                                                              I think we can deal with that problem if it arises. (The other stuff you said, I’m on board with.)

                                                              1. 1

                                                                Yeah, that part is pretty context sensitive. If you know the user, it’s fine. When you notice that there’s a new user who doesn’t seem to be saying anything except stuff like that, then you might start going hmmm.

                                                            1. 2

                                                              Question: does anyone here have opinions about whether the content of this article can be taken at face value or anything like it? Is HTTP really obsolete? Is IPFS really a good replacement for it? (Bonus question: is this article actually mostly an advertisement for Neocities?)

                                                              1. 2

                                                                So, I discussed this on IRC, but I can consolidate and summarize my thoughts here. I don’t think this article can be taken at face value. At the very least, no browsers yet support IPFS. IPFS support on the server is still nascent. IPFS does not solve the persistence problem any more than Bittorrent does. There will still need to be a server out there “seeding” the content onto the IPFS network in order to prevent it from disappearing. IPFS, in its current form, is a dynamic CDN, which automatically caches and redistributes the most popular data. That’s pretty neat, insofar as it means that you can host content on some cheap shared hosting solution and not worry about a frontpage post on Reddit or Hacker News melting your site. But without some solution to the incentive problem of getting people to host content they’re not interested in, I don’t see IPFS replacing HTTP as it currently stands.

                                                                1. 2

                                                                  I can’t vouch for IPFS as the specific implementation, but I think a named-data networking infrastructure for accessing static data is an obvious missing piece in the current internet infrastructure. The new wrinkle is that this would mean splitting internet pages into things that can never change (a specific image, an archived web page) and addresses of things with changing content (an ongoing comment thread, the front page of a news site). Currently we treat everything as the second type of thing, which makes it easy not to think about things that need to change as a separate category, but it also makes link rot happen.

                                                                  The big draw for an IPFS style system for me is that it would make personal archive caches much more automatable. Static websites from years ago and miscellaneous PDF documents could be indexed with hash identifiers that are guaranteed to fetch the correct content if it is still somewhere to be found.
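
                                                                  The property doing the work there is small enough to sketch (toy Python, not IPFS’s actual interface):

                                                                      import hashlib

                                                                      class ContentStore:
                                                                          # toy content-addressed store:
                                                                          # the key IS the hash of the bytes
                                                                          def __init__(self):
                                                                              self._blobs = {}

                                                                          def put(self, data):
                                                                              key = hashlib.sha256(data).hexdigest()
                                                                              self._blobs[key] = data
                                                                              return key  # location-independent address

                                                                          def get(self, key):
                                                                              data = self._blobs[key]
                                                                              # self-verifying fetch: no trust in
                                                                              # the host required
                                                                              assert hashlib.sha256(data).hexdigest() == key
                                                                              return data

                                                                  Anyone who still holds the bytes can serve them, and the fetcher can verify them without trusting the host; that’s what would make personal archive caches automatable.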

                                                                  1. 2

                                                                    What would be really interesting would be to see IPFS merge with filesystems, so that there is literally no distinction between your local cache and the network. When you (the user) request data, your computer fetches it, transparently, regardless of whether it was stored locally or available on the Internet. It would be the apotheosis of Sun’s old motto, “The network is the computer.”
