1. 2

    … the next time the US has to fight a conventional war

    Is that considered likely anytime soon? When was the last conventional war? What will the next one be?

    1. 1

      The last conventional war the US fought was the opening round of Operation Iraqi Freedom (the “shock-and-awe” phase of the current Iraq war). But that almost doesn’t count, because it was the US fighting near the peak of its power against an Iraqi army that had never fully recovered from the massive defeat inflicted upon it in the Gulf War.

      The last war the US fought against a functional military was the Gulf War, in 1991. While today we look upon the US victory in the Gulf War as an inevitability, we must remember that it was a huge surprise when Saddam’s army and air force crumpled after roughly six weeks of sustained combat operations (the ground campaign itself lasted only about 100 hours). Saddam’s military, at the time, was the fourth largest in the world, and was combat hardened after the almost decade-long Iran/Iraq war of the ’80s. It was also far better equipped, relative to the US, than the North Vietnamese military had been in the ’60s. As a result, there were many who feared that the US was getting into another quagmire by attempting to liberate Kuwait in 1991. Instead, the world was treated to the largest military surprise since the German invasion of France in 1940. Never before had a military that large been destroyed so thoroughly in so short a time.

      However, 1991 is now decades in the past. While the US military has been distracted by the twin insurgencies of Iraq and Afghanistan, other militaries (notably China’s and Russia’s) have been studying the conditions behind the US military’s victory in the Gulf War and have been coming up with ways to counter the advantages that the US displayed in that conflict. A distributed system like passive radar is one such way. Other ways include the so-called “hybrid war” tactics displayed by the Russian military in Crimea, Donbass, and, lately, Syria. China has been developing a system of tactics and technologies that fall under the umbrella of “anti-access/area-denial” (A2/AD), which focus on keeping the US Navy at a distance, in order to secure Chinese control over the South China Sea and potentially keep the US from coming to Taiwan’s aid should China choose to invade.

      The US military, in my estimation, is much like the British army prior to World War 1. The British military, riding high after its victories in the Napoleonic wars and the Crimean War, was confident in its own abilities and sanguine about the perceived weakness of its adversaries (rising Germany and the fading empires of Austria-Hungary and the Ottomans). As a result, the British military focused on fighting “brushfire wars” against native rebellions in Africa, India and Afghanistan (lol). Because of this, the British army found itself ill-prepared and under-equipped to fight a “high-intensity conflict against near-peer adversaries” (to use the words of modern military analysts).

      There is a school of thought that says that conventional war against near-peer adversaries is impossible, because any such war would inevitably escalate to nuclear weapons. I actually wish this argument was true. It would simplify our threat assessments greatly. However, whenever I hear this argument, I am reminded of all the arguments prior to World War 1 or World War 2 that suggested that a major war was impossible because of the unprecedented destructive capabilities of modern weapons.

      Do I think a war is likely? At this moment, I do not. Can I tell when the next war is going to occur? If I could, I would be working for the CIA, NSA or DoD, with a top-secret clearance. What I do have is a vague sense of unease. This sense of unease comes from the fact that the world today is multi-polar and unstable. The US military no longer enjoys the unchallenged hyperpower hegemony that it had at the end of the cold war. Yet, it still acts and fights as if it does. I also remember that wars, when they do occur, can stem from causes that are extremely surprising at the time. Who would have thought that the assassination of the crown prince of Austria-Hungary by a Serbian nationalist would lead to a war that resulted in the deaths of millions?

    1. 2

      I haven’t always agreed with Lanier, but he’s spot-on in pretty much everything he says in this piece.

      1. 2

        So the first time I read this post, what struck me was that what would be really interesting is genuine works of computer science fiction. For example, there is a dearth of science fiction exploring HCI concepts. Probably the most interesting example I can think of off the top of my head is Dennou Coil, which is in fact excellent.

        We need more science fiction stories that seriously take into account different ways of using computers, where that isn’t incidental to the main plot.

        1. 2

          exploring HCI concepts

          Dennou Coil

          Tell us more about how it does this!

          1. 2

            Well, it’s kind of hard to explain Dennou Coil in a few lines, I think. Trying to compress it down to a sentence: Dennou Coil is an anime that presents a plausible subculture that could develop around unobtrusive alternate-reality glasses. And it’s very much presenting the concept, worldview and all, from a child’s perspective. I think this is actually a sort of hidden genius, because in modern media it’s very fashionable to focus on the dangers and the deviants (you know: what if someone uses Google Glass to record you at the beach?). And while that kind of issue gets touched on in Dennou Coil, it’s an adult sort of concern that’s relegated to the background.

            What results is this sort of interesting mesh between superstition and science fiction that treads a very fine line between possibility and outright fabrication. The aesthetic theme of Dennou Coil is basically the Missingno glitch in the original Pokémon games: dangerous behavior of a complicated technology that the agents experiencing it simply aren’t in a real position to understand. So instead they understand it through the traditional ways of human understanding: stories and rumors and myths. In that sense, it’s a deeper meditation on HCI than, say, just thinking about the raw mechanics of how something should work to be ergonomic. Perhaps it’s best summed up as being Human(s)-with-Computers Interaction, rather than a focus on the individual ‘user’.

        1. 3

          To put this in more rationalist terminology, one of the ways around Goodhart’s law is that you find a measure… but keep it secret. Then, so long as no one else figures out the measure, you can target it without the measure being corrupted by people gaming the system.

          The problem is that the measure will inevitably become corrupted, because people aren’t that dumb. I have an anecdote from a friend who worked at Subway. Subway implemented its own version of the lime equation, except instead of counting limes, they’d count bread. So you couldn’t get a free sandwich, because that would make the bread count come up short at the end of the day. But you could order a 6-inch veggie, end up with a footlong deluxe, and the tracking system would miss it.
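          To make the blind spot concrete, here’s a toy sketch of that bread-count audit. Everything here is invented for illustration (the menu, the bread units, the order tuples), and I’m modeling the untracked dimension as toppings, since a proxy metric only guards the one axis it actually measures:

```python
# Invented stand-in for the Subway audit: the only thing tracked is bread.
BREAD_UNITS = {"6-inch": 1, "footlong": 2}

def bread_audit(billed, served):
    """End-of-day check: does bread consumed match bread billed?

    `billed` and `served` are lists of (size, toppings) tuples.
    """
    used = sum(BREAD_UNITS[size] for size, _toppings in served)
    expected = sum(BREAD_UNITS[size] for size, _toppings in billed)
    return used == expected

# A free sandwich is caught: bread was used but never billed.
assert not bread_audit(billed=[], served=[("6-inch", "veggie")])

# But an upgrade along a dimension the metric never sees (premium
# toppings on the same bread) sails right through the audit.
assert bread_audit(billed=[("footlong", "veggie")],
                   served=[("footlong", "deluxe")])
```

          The metric guards exactly one axis, so the gaming simply migrates to the axes it doesn’t measure.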

          Another example is college admissions. Good colleges don’t publicize the “lime equations” that determine who gets a spot and who doesn’t. But people reverse-engineered the process anyway, simply by using other statistics about the student body that were published by the campuses. So, at this point, it doesn’t really do any good for colleges to keep their equations secret; all that does is disadvantage the people they’re trying to help the most.

          If a metric is high-stakes enough, it will be targeted, whether it’s secret or not, because no matter how secret the metric is, targeting it will have effects on the real world, and those effects, given time, will be observed and reverse-engineered.

          EDIT: The e-mail volume metric he mentions is exactly the sort of thing that’s easy for employees to game and corrupt. Given time, people will figure out that the people who send a lot of e-mails are the ones the boss favors. Then everyone will send lots of e-mails all the time, regardless of whether they’re getting any work done or not, and the advantage of the metric will disappear.

          1. 1

            This is essentially the objection I was going to post, so I entirely agree with it.

            Relevant to this is this discussion on LessWrong (1.0) about eigenkarma and Google’s PageRank algorithm. Some key quotes:

            Ilya Shpitser:

            This won’t work, for the same reason PageRank did not work, you can game it by collusion. Communities are excellent at collusion.

            Oliver Habryka:

            “This won’t work, for the same reason PageRank did not work”

            I am very confused by this. Google’s search vastly outperformed its competitors with PageRank and is still using a heavily tweaked version of PageRank to this day, delivering by far the best search on the market. It seems to me that PageRank should widely be considered to be the most successful reputation algorithm that has ever been invented, having demonstrated extraordinary real-world success. In what way does it make sense to say “PageRank did not work”?

            Ilya Shpitser:

            Google is using a much more complicated algorithm that is constantly tweaked, and is a trade secret – precisely because as soon as it became profitable to do so, the ecosystem proceeded to game the hell out of PageRank.

            Google hasn’t been using PageRank-as-in-the-paper for ages. The real secret sauce behind Google is not eigenvalues, it’s the fact that it’s effectively anti-inductive, because the algorithm isn’t open and there is an army of humans looking for attempts to game it, and modifying it as soon as such an attempt is found.

            Wei Dai:

            Given that, it seems equally valid to say “this will work, for the same reason that PageRank worked”, i.e., we can also tweak the reputation algorithm as people try to attack it. We don’t have as much resources as Google, but then we also don’t face as many attackers (with as strong incentives) as Google does.

            Ilya Shpitser:

            It’s not PageRank that worked, it’s anti-induction that worked. PageRank did not work, as soon as it faced resistance.
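              For the curious, the collusion Shpitser is pointing at is easy to demonstrate against the textbook algorithm. Here’s a toy power-iteration PageRank in pure Python (the six-page web and its link structure are made up), with a three-page ring built purely to inflate one page:

```python
def pagerank(links, damping=0.85, iterations=100):
    """Plain power-iteration PageRank, as in the original paper.

    `links` maps each page to the list of pages it links to.
    Returns a dict of page -> rank, summing to 1.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Everyone gets the teleportation share...
        new = {p: (1 - damping) / n for p in pages}
        # ...plus each page splits its vote among its outlinks.
        for p in pages:
            out = links[p] or pages  # dangling pages spread evenly
            share = damping * rank[p] / len(out)
            for q in out:
                new[q] += share
        rank = new
    return rank

# Three honest pages linking to each other, plus a three-page
# collusion ring (d, e, f) built purely to inflate page f.
web = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"],
    "d": ["f"], "e": ["f"], "f": ["d", "e"],
}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # prints: f
```

              Without a single inbound link from the honest cluster, the promoted page ends up with the top rank. That is the gaming-by-collusion problem in miniature, and it’s why the open, static version of the algorithm stopped being the whole story.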

            1. 2

              Thinking more on this, it seems like this is the takeaway:

              Goodhart’s Law is not so easy to outsmart. There are no silver bullets. As usual, there are only a whole lot of lead bullets, and those bullets look like this:

              1. Be really clear on what you are actually trying to maximize.

              2. Find a good and non-obvious measure of that thing.

              3. Keep that measure secret.

              4. Even as you use that measure, never forget what actual thing you’re after.

              5. Keep looking for new, non-obvious measures of that thing.

              6. Abandon the old measure and switch to a new and different one as soon as you detect even a hint of gaming.

              7. Repeat steps 2–6 indefinitely, forever.

              This is hard. It is not a one-stop solution. You can’t ever stop working on this. It requires continuous effort, possibly even continuously increasing effort (as new measures are probably going to be harder and harder to find).

              But I don’t know that there is any alternative.
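              A sketch of what that loop might look like in code. Everything here is a hypothetical stand-in (the candidate measures, the `evaluate` and `is_gamed` hooks); the point is only the shape of steps 2 through 7:

```python
import random

def run_goodhart_loop(candidate_measures, evaluate, is_gamed, rounds):
    """Score with a secret measure; rotate at the first hint of gaming."""
    pool = list(candidate_measures)
    random.shuffle(pool)                  # step 2: a non-obvious choice...
    current = pool.pop()                  # ...kept secret (never published)
    history = []
    for _ in range(rounds):
        history.append(evaluate(current))   # step 4: use it, eyes on the goal
        if is_gamed(current, history):      # step 6: even a hint of gaming
            if not pool:                    # step 5 has fallen behind
                raise RuntimeError("out of fresh measures")
            current = pool.pop()            # switch to a new measure
            history = []                    # the old one is abandoned
    return current

# Toy demo: every measure starts getting gamed after three readings.
survivor = run_goodhart_loop(
    candidate_measures=["emails sent", "commits merged", "bugs closed"],
    evaluate=lambda measure: 1.0,
    is_gamed=lambda measure, history: len(history) >= 3,
    rounds=7,
)
```

              The `RuntimeError` branch is the real bite of the argument: the loop only works for as long as step 5 keeps producing fresh, non-obvious measures faster than they get burned.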

          1. 1

            This is excellent. It seems to me like another iteration of Stallman’s “treacherous computing” concept.

            1. 1

              Either the title or the abstract is misleading.

              Empirical support for acquiescence suggests that the researchers established that people do in fact do this.

              An empirical case for acquiescing suggests that the researchers demonstrated that it is correct for people to do this.

              Is this paper making a descriptive claim? A normative claim? Both?

              1. 2

                It’s making a descriptive claim. I’m not sure why they wrote the title the way they did.

              1. 1

                At the end of the paper, Shepard also speculates on the subject of xeno-evo-psych, so to speak:

                I conclude with a brief consideration of the sense, if any, in which a tendency toward the properties of color representation I have been considering may be general—or even universal. Throughout, I have spoken of the possibility of selective pressures to which highly visual terrestrial animals may be subject in the terrestrial environment. I used the qualification “terrestrial” because conditions of illumination in other, for example, aquatic environments can be quite different. Owing to the already noted selective absorption by water of the longer wavelengths, with increasing depth in a marine environment, the available solar light, in addition to being progressively reduced in overall intensity, becomes progressively restricted in spectral range to the shorter wavelengths. This blue shift and compression in range of available wavelengths is known to be matched by a corresponding blue shift and compression in the range of spectral sensitivity of deeper dwelling marine animals (Lythgoe, 1979).

                It does seem to me, however, that the wavelength dependencies of the variable height of a sun, presence of atmospheric aerosols, and direct versus indirect illumination might apply quite generally on the surfaces of planets capable of supporting the evolution of highly visual organisms. Such a planet is presumably likely (a) to circle a long-lived star that emits a stable, broad range of wavelengths, (b) to undergo regular rotation about its own axis (owing to the conservation of angular momentum), and (c) to possess an atmosphere that differentially filters the wavelengths of direct and scattered light depending on the (rotationally determined) angle at which the light enters the atmosphere and the size distribution of atmospheric particles. Moreover, the arguments for categorical representation of colors based on memorability and shareability do not depend on particular features of the terrestrial environment. In short, just as there may be universal selective pressures toward mechanisms for the representation of three-dimensional space and for an exponential law of generalization (Shepard, 1987a), there may be quite general selective pressures toward mechanisms for the representation of the surface characteristics of objects in a low-dimensional (perhaps even a three dimensional) color space, with a circular component of hue, and a categorical structure.

                Possibly, behavioral and cognitive theorists should aspire to a wider scope for their science. An evolutionary theory of mind need not confine itself to the particular minds of the more or less accidental collection of species we find on planet Earth. There may be quite general or even universal principles that characterize planetary environments capable of supporting the origin and evolution of increasingly complex forms of life. If so, there may be corresponding general or even universal principles of mind that by virtue of their mesh with the principles of these environments, are favored by a process of natural selection wherever it may be taking place.

                1. 2

                  Excellent essay!

                  I’ve got a comment and a question:

                  Comment. See also: PHP vs. insert your favorite web-app-appropriate language here.

                  Question. Does this pattern mean that no financial incentives exist that are strong enough to overcome this individual-hacker independence? Can the power of Lisp not be harnessed, by a group, for profit? Why not?

                  Edit: Perhaps the answer is here:

                  The Lisp Curse does not contradict the maxim of Stanislav Datskovskiy: Employers much prefer that workers be fungible, rather than maximally productive.

                  The author attributes this to the “venality” and “close-mindedness” of managers, but I wouldn’t be so quick to dismiss it. The expected productivity of an employee, after all, is his productivity × his reliability… and this is exacerbated by dependence of a team on any given employee’s work, etc. Selecting for fungibility preferentially to maximal productivity seems to me to be nothing more than perfectly rational optimization on a manager’s part.

                  (Of course this is different from employees whose main value is to be sources of ideas, etc., such as in research labs and so forth, or for designers (especially design team leads), etc.—but that’s hardly the median case, and anyway is irrelevant to the subject of programming languages.)

                  1. 3

                    I don’t know how much of a case this is with Lisp, but a lot of Forth programmers say that while Forth makes for excellent productivity for a capable single programmer, it’s often quite impossible for any other Forth programmer to work on the first one’s program. The programs end up idiosyncratic private languages that are impenetrable without deep study. So it’s less about workers not being maximally fungible but possibly not being able to hire any new productive workers at all for an existing system.

                    Maybe whole programming systems should be set up something like this, with a new kind of module system. The inside of a module is full of maximally productive, weird, hyper-expressive language shenanigans, and then somehow there’s a mutually comprehensible interface layer between the modules, so that the software can be assembled from these interacting fiefdoms. That’s sort of how Unix and Unix programs worked, with the interface layer being plaintext streams. I’m not quite sure what this should ideally look like. The current libraries-and-APIs approach isn’t good enough: you’d want to be able to exchange structures, idioms and sublanguages, not just provide a black box with buttons to push. You would also want to avoid boxing single components in a large system into suboptimal architectures by imposing too many constraints from the surrounding area; this part gets really hard.
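                    As a toy illustration of the “expressive inside, plain at the boundary” idea (the mini stack language and the interface shape are just things I made up for the sketch):

```python
def run_forthish(program):
    """Interpret a tiny private stack language, in the spirit of Forth.

    Nobody outside this module needs to read this; it is the
    idiosyncratic 'fiefdom' interior.
    """
    stack = []
    for word in program.split():
        if word.isdigit():
            stack.append(int(word))
        elif word == "+":
            stack.append(stack.pop() + stack.pop())
        elif word == "*":
            stack.append(stack.pop() * stack.pop())
        elif word == "dup":
            stack.append(stack[-1])
    return stack

def area_of_square(side):
    """The module's public face: plain argument in, plain dict out."""
    (result,) = run_forthish(f"{side} dup *")
    return {"shape": "square", "side": side, "area": result}

print(area_of_square(7))  # {'shape': 'square', 'side': 7, 'area': 49}
```

                    Other modules only ever see the plain function and the plain dict; the Forth-ish interior can be as idiosyncratic as its author likes. The unsolved part is the one noted above: sharing the structures and idioms themselves, rather than just a black box with buttons.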

                    Maybe Alan Kay is on to something with the whole “cells receiving chemical signals and deciding what to do, not circuits receiving electrical impulses and reacting with lockstep determinism” metaphor he’s been trying to push for the last 40 years. (Also sorta related, Functional in the small, OO in the large.)

                    1. 1

                      Maybe Alan Kay is on to something with the whole “cells receiving chemical signals and deciding what to do, not circuits receiving electrical impulses and reacting with lockstep determinism” metaphor he’s been trying to push for the last 40 years.

                      Do you have any links/references about this? It sounds really interesting, and I’ve not heard of it before!

                      1. 1

                        There’s a bit of it in the Programming and Scaling talk that was linked here a while ago.

                        Some emails about the origin of OO, more on messaging.

                      2. 1

                        By the way, “Functional in the small, OO in the large” is a big part of how Swift works. (Well, how I write Swift, anyway. Non-functional (heh) approaches are certainly also available.)

                      3. 3

                        Does this pattern mean that no financial incentives exist that are strong enough to overcome this individual-hacker independence?

                        Paul Graham:

                        So you could say that using Lisp was an experiment. Our hypothesis was that if we wrote our software in Lisp, we’d be able to get features done faster than our competitors, and also to do things in our software that they couldn’t do. And because Lisp was so high-level, we wouldn’t need a big development team, so our costs would be lower. If this were so, we could offer a better product for less money, and still make a profit. We would end up getting all the users, and our competitors would get none, and eventually go out of business. That was what we hoped would happen, anyway.

                        What were the results of this experiment? Somewhat surprisingly, it worked. We eventually had many competitors, on the order of twenty to thirty of them, but none of their software could compete with ours. We had a wysiwyg online store builder that ran on the server and yet felt like a desktop application. Our competitors had cgi scripts. And we were always far ahead of them in features. Sometimes, in desperation, competitors would try to introduce features that we didn’t have. But with Lisp our development cycle was so fast that we could sometimes duplicate a new feature within a day or two of a competitor announcing it in a press release. By the time journalists covering the press release got round to calling us, we would have the new feature too.

                        It must have seemed to our competitors that we had some kind of secret weapon– that we were decoding their Enigma traffic or something. In fact we did have a secret weapon, but it was simpler than they realized. No one was leaking news of their features to us. We were just able to develop software faster than anyone thought possible.

                        1. 2

                          Yes, I’ve read that essay too, but—why only that example? Where are the others? If Lisp is so good, why isn’t everyone using it? Surely the fact that Paul Graham, of all people, used it, to make money, and now sings its praises, ought to spur a whole host of people to check it out, discover it, put it to use, and profit thereby? Why isn’t it way, way, way more popular?

                          1. 1

                            It seems like when people try to use it, they find the ecosystem nasty. The very visible classic example of people buying pg’s Lisp evangelism, actually building a thing in Lisp, and getting burned was the original Reddit. Another report of a reasonably capable newcomer trying to grab onto Common Lisp and bouncing off was Steve Yegge’s. It seems like you really need people building a common ground of solid libraries nowadays, and you can’t really build solid stuff in an ecosystem where everybody has their own 80% solution and the venerated standard solution was informed by 1980s computing practices.

                          2. 2

                            I have issues with that essay. Paul Graham never seems to acknowledge the selection bias that using Lisp imposes. It’s entirely possible that the reason Viaweb did well isn’t that it was written in Lisp, but that it attracted the sort of people who learn Lisp. It’s entirely possible that if he’d attracted those same programmers, but somehow managed to get them to use Perl, he’d have been just as productive.

                            Lisp was not (and still is not) a “mainstream” programming language. That means you have to go out of your way to learn Lisp. The sorts of programmers who go out of their way to learn obscure programming languages and libraries are more likely to be more productive (or, in Silicon Valley terms, “passionate”) programmers who would be able to bring that additional productivity to bear no matter what language they were using.

                            1. 1

                              Selection bias seems like it could explain a lot of this puzzle.

                              The question is, do we find the same effect for other, comparably obscure / non-mainstream languages?

                        1. 2

                          Ok, I just got around to reading this. I only have one comment:

                          It’s amusing how Gearbox has accidentally stumbled onto creating one single, minute, narrow instance of something that Blizzard cranks out, with casual ease, over and over, in a steady stream for years and years on end.

                          (In other words: yeah, some people are well aware of all of these principles—which is why those people have subscriber numbers in the double-digit millions, and huge, Olympus-sized mountains of cash.)

                          1. 2

                            Yup. It’s funny too, because he talks about fun and addiction in this essay, and kind of starts to conflate them a bit at points. I was especially annoyed with his admonition that “fun isn’t enough”, because my favorite game at the time was Halo 3, and I thought Halo 3 was a lot of fun in a way that MMOs rarely were. Moreover, I felt that I could explain the principles of what made Halo 3 fun precisely because I’d had so much experience tweaking the experience through Forge. So I’d always wanted to write a response essay about what makes something fun, but never quite got around to it.

                            Maybe I should, since you said you’d like to see more of my Halo stuff.

                          1. 3

                            Related: https://weblogs.asp.net/alex_papadimoulis/408925

                            Alex Papadimoulis brings up an analogy of a carpenter building a shelving unit asking, “Which is best to drive nails? An old shoe or a glass bottle?” Advice at the operational level would be to compare and contrast the characteristics of shoes and glass bottles and try to determine which would be best to drive nails in this circumstance. Advice at the tactical level would be to say, “Go to the hardware store and buy a hammer.” Advice at the strategic level would be to ask, “Why are you using nails here?” And finally, advice at the mission level would be, “Why are you even building a shelving unit?”

                            1. 3

                              I agree with this, and yet I also object.

                              Half of the reason I object is outlined in this comment. Here’s the other half.

                              Sometimes, doing something “the actual right way” instead of “the best way, given the constraints of the current advice level”, has costs.


                              “Which is best to drive nails? An old shoe or a glass bottle?” “Go to the hardware store to buy a hammer.” “Yes, very good, of course a hammer is best, but there are zombies outside, and they will eat me if I leave the house. Now, about my question?”

                              The equivalent of the zombies, in the context of software engineering, is the cost of a redesign, of switching, of re-learning, etc.

                              Take a real-world example. Suppose someone looked at OborWiki and said to me (apropos of me asking them for some advice on implementing a new feature, perhaps): “Obormot, your wiki system is built on PmWiki, which runs on PHP. But PHP is a terrible language. My advice to you is that you should switch to Rust.”

                              Is this useful advice?

                              How much effort would it take to “switch to Rust” (or to anything else)? I’d have to re-implement the entirety of the PmWiki software (which has been in active development for a decade and a half), AND all the third-party features, AND then I’d have to… but it doesn’t matter what I’d have to do next, this is already half a lifetime’s work, and for what, exactly? Well, it would be more correct / good / optimal. Ok. Obviously this is absurd advice, and the only sensible response is a dismissive eyeroll.

                              Certainly you might say “if only you’d done it the right way from the start…”—but as usual: a lesson is learned, but the damage is irreversible.

                              (Note, to pre-empt an obvious but flawed response: this is not a question of sunk costs! It’s a question of what it costs, going forward, to switch to “right” approach on a higher advice level. Often, the cost is so high that the actual choice is “continue to optimize on the current advice level only” vs. “abandon the entire endeavor”.)

                              1. 2

                                Also, re: Papadimoulis’s specific question on what sort of response to such questions is more useful:

                                The operational-level advice may be problematic for exactly the reasons he states. But the answer he gave (or would give) is useless! It’s nothing more than “UR DOIN IT RONG” in more words.

                                Ok, so that’s a bad way to store things in a database. Why? In what way? What’s a better way? What is even the category of the problem? Come on, man! Don’t just say “RTFM!!”—give some pointers; some keywords to google, some vague gesturing in the direction of the topic, something!

                                That sort of “ur doin it rong” response is very hard to distinguish from noise, and even if the advisee takes it to heart, it’s not very actionable. “Read a database book”? Who has time for that? Yes, that’s the ideal way to go, but if you require someone to read a whole database book before they can proceed on their current task, then they just won’t read anything and will proceed anyway. Whereas if you point them in the direction of some more specific advice, they might actually learn something and improve.

                              1. 2

                                Here’s the converse of the caveat in my other comment:

                                The lower the level, the more objective questions of optimality are.

                                At the operations level, optimality is provable/demonstrable. At the mission level, optimality is largely a matter of values. The strategy and tactics levels are in-between. (Externalities are a large part of why tactical and strategic issues cannot be optimized in a totally objective way.)

                                This means that sometimes you may get advice on level N+1, which is inapplicable despite the fact that it does indeed offer a “better” way of achieving your level N+2 objectives. What you’re looking for really is level N advice, not “any advice that helps me achieve my top-level goals”.


                                “How many wheels should my car have?” “Don’t build a car; they pollute, cause traffic, and are inefficient.” This is almost certainly useless advice; optimality on the mission level, in this case, is strongly entangled with various values, much broader-scope notions of optimal arrangements of society, etc.

                                This suggests heuristics for advice-giving and advice-taking:

                                For advice-giving: make sure that you understand your advisee’s values / preferences / views on externalities / etc., as these are less changeable than mission-related decisions at any level, but often intrude on the latter.

                                For advice-taking: make it clear to your would-be advisor what is the highest level of advice you’re interested in, and why (i.e. make it clear which options on the level(s) above that you have investigated and rejected, i.e. what your constraints are).

                                1. 2

                                  By the way, Liz England’s website has some really interesting other material! Her Game Design Library, and her reviews, are gold; I definitely recommend checking them out.

                                  1. 2

                                    Excellent article!

                                    Composer: “I created a theme song for the door.”

                                    … lol.

                                    Anyway, coming at this from another perspective: this is actually a fascinating illustration of a huge difference between computer RPGs (or games, in general), and TTRPGs. Almost all of these problems don’t exist in TTRPGs! The reason’s obvious—they’re handled by a human.

What this makes me wonder is—what would it take to build a computer system that would handle the “door problem” as automatically / seamlessly as a human GM? Is this an AI-complete problem?

                                    1. 2

                                      Replying to my own question, after thinking about it over a cup of tea:

A big part of the reason this can’t be automated is that some aspects of the door problem are design decisions, rather than implementation decisions. And the reason they can’t be automated is not a matter of coherent understanding or modeling of the world; it’s a matter of creative control.

                                      Taking just the first four questions listed:

                                      • Are there doors in your game?
                                      • Can the player open them?
                                      • Can the player open every door in the game?
                                      • Or are some doors for decoration?

                                      These are things that must be decided by the person / team / entity who is designing / running the game; and these are decisions that have to do with what the design goals are, what the vision for the game is.

Whether we can build an AI that can make such decisions is, perhaps, an interesting question (the answer seems to be both trivially “yes” and difficult to operationalize, since the real difficulty seems to me to lie not in answering these questions—they’re mostly yes/no, nothing too special—but in asking them).

                                      But if we do, deciding to delegate such questions to the AI means surrendering / delegating creative control; which we may indeed choose to do, but that would in fact be what we’re doing, in such a case. And the requirement for an entity to which we can delegate creative control of a game (or similar project) seems to be: can this entity understand (or behave as if it understands) our design goals, our creative vision? If not—then it can’t make design decisions.

                                    1. 2

                                      Good essay. I would add this caveat:

                                      Sometimes, someone asks you for advice of level N. However, what you know, and what the one who asks you either does not know or does not want to acknowledge, is that no advice of level N will suffice, for their situation; the flaw in their approach is on level N+1.

                                      (Example: “The wheels aren’t holding the car up; what sort of bolt should I use to ensure that they hold?” —when the problem is that the car has three wheels instead of four; no kind of bolt will fix that problem.)

                                      Such cases are difficult. You know that no advice you give on level N will work, but no advice you give on level N+1 will be accepted.

Being open to being told that this is the case is, I think, a critical part of being rational.

                                      1. 2

                                        Yes. That is an excellent succinct summary of one of the major things I wanted to get across with this essay but didn’t state explicitly within it.

                                        Do you mind if I add it in directly to the wiki page?

                                        1. 1

                                          Go for it.

                                      1. 2

                                        That can’t be the right version of the image, right? It’s way too tiny to read anything…

                                        Edit: Re-hosting full-size image here: http://share.obormot.net/images/So-Good-They-Cant-Ignore-You.png

                                        1. 1

                                          So, this is one of those interesting mixes of “hopelessly idealistic but totally correct” and “insane” that you sometimes get from people who are contrarian, principled, technically oriented, and confused about what things are for and why people do things.

                                          Let’s dive in.

                                          I – Obeys operator

                                          Yes. No quibble with this one. It’s inexcusable that this is ever not the case.

                                          II – Forgives mistakes

This is absolutely the ideal, but note that implementing this would require a Herculean effort, all the more improbable because it would be distributed across many people, companies, etc. (For a roughly comparable, if smaller-scale, thing, see the Mac OS’s pervasive, system-wide, everything-interoperable-with-everything copy/paste and drag/drop support; starting from the idea for NSPasteboard and getting to anything resembling today took 20 years and 20 system versions, of course with various partial or weaker implementations along the way, etc.)

                                          III – Retains knowledge

                                          This seems to require that no large (multi-gigabyte) data ever be loaded wholly into RAM, nor operations on such data ever be conducted in RAM. How would following this law not result in a tremendous downgrade in performance?

                                          IV – Preserves meaning

                                          This is unrealistic. No closed-source software? Ok, good luck accomplishing anything in the real world.

                                          V – Survives disruptions

                                          Who the heck wants to interactively debug crashes? Also, what on earth is “an error condition requiring the operator’s intervention”?

                                          VI – Reveals purpose

                                          Again with the debugger… Well, this is an ok ideal, but it seems largely irrelevant to me, for most people in most cases.

                                          VII – Serves loyally

                                          This I agree with wholeheartedly.

                                          Here’s the crux of the matter:

                                          Most people aren’t programmers and don’t want to be.

                                          Any ideal, or plan, or design, that asks users to be programmers, or expects that they’ll be programmers, etc., is fundamentally confused and doomed to failure, as well as actually not being “ideal” in any meaningful sense.

                                          1. 1

It’s important to note that the whole ‘debug any crash’ thing comes from Lisp Machines. People who used Lisp Machines really liked that they could in principle recover from any error if they wanted, even if in practice they might decide to just let the thing go. It’s also where things like needing access to the source code come from, as the Lisp Machine allowed introspection abilities that were simply beyond what’s available on a modern *nix-based system.

                                            You can read more in this vein in the Unix Haters Handbook:


                                            1. 1

My handwavy thought about the forgives-mistakes / retains-knowledge thing is that if we stick to human workstation use instead of scientific computing and similar data crunching, the amounts of data involved in something like video editing can be huge, but the data describing the user inputs that led to the current state should be small enough to serialize without much effort. So for the video editing case, you’d treat the source video file as sacrosanct, save the user inputs, and implement user operations in a way that lets them be reversed, or just re-walk the input sequence up to the earlier point from which you’d want to resume (which might be slow). Baking this so deeply into the OS+programming language layer that doing things like this would be a no-brainer for any new industrial 3D modeling or photo-editing application you’d develop would of course be an interesting engineering exercise.

                                              This wouldn’t get you quite the “up to the current state as fast as turning on a lamp” speed, but replayable operations would let you keep large complex state in RAM and be able to have the machine load the starting state and the operation sequence from non-volatile memory and then play back the operations to get you to your current state.
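The replay idea above can be sketched in a few lines. This is a minimal illustration only (the class and operation names are made up for the example): the source data is never mutated, every user operation is appended to a small serializable log, and the current state—or any earlier state—is reconstructed by replaying the log against the source.

```python
# Minimal sketch of "save user inputs, replay to rebuild state".
# The source material is sacrosanct; only the tiny operation log
# would need to be persisted. All names here are illustrative.

class ReplayableEditor:
    def __init__(self, source):
        self.source = tuple(source)   # immutable input, never mutated
        self.log = []                 # small, serializable op records

    def apply(self, op, *args):
        """Record an operation; state is derived, never stored directly."""
        self.log.append((op, args))

    def state(self, upto=None):
        """Rebuild state by replaying the log (or just a prefix of it)."""
        frames = list(self.source)
        for op, args in self.log[:upto]:
            if op == "cut":
                start, end = args
                del frames[start:end]
            elif op == "duplicate":
                index, = args
                frames.insert(index, frames[index])
        return frames


editor = ReplayableEditor(["f0", "f1", "f2", "f3"])
editor.apply("cut", 1, 3)        # remove frames f1, f2
editor.apply("duplicate", 0)     # duplicate frame f0
print(editor.state())            # -> ['f0', 'f0', 'f3']
print(editor.state(upto=1))      # "undo" by replaying a prefix: ['f0', 'f3']
```

Note that undo here costs a full replay, which is exactly the “might be slow” trade-off mentioned above; a real system would presumably checkpoint intermediate states to avoid that.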

No closed-source software on client machines would still let you do business with proprietary software; you’d just need to run the bits you want to stay hidden on your own server. This part is hit hardest by the thing where regular users don’t want to be programmers. Users are going to notice if you can turn the machine off and turn it on and it starts instantly with all the stuff you had on it earlier, and they’ll expect it to keep doing that. They won’t notice or care whether the software code has been piped through a code obfuscator.

                                            1. 3

                                              Okay so besides being generally cool, my question with this post is:

It cost the guy $9000 or so to build what is essentially a movable dwelling, and it looks quite cool. What I naturally wonder, then, is: what would the economics be if everyone decided to do this?

                                              As in, right now the going price for a home is hundreds of thousands of dollars. What exactly is the thing that makes building a home so damn expensive that this bus leaves out? Would the cost of busing go up if more people decided to do it? My candidates would be:

                                              • Living Space
                                              • Electrical Wiring
                                              • Real Plumbing
                                              • Real Kitchen
                                              • (EDIT) Laundry
                                              1. 3

                                                Those are all pretty important things!

                                                Also, a house can house a whole family. Can you imagine living in that bus, as a family? That is hellish. Basically, “what are the economics if everyone lives in dorms / Soviet communal apartments”? Interesting abstract economic question, not remotely realistic actual scenario.

                                                Edit: Also, it looks massively claustrophobic. I can see having a panic attack within a day of living there.

                                                It’s probably relevant that there’s no talk, on that page, of anyone actually deciding to live there, much less actually going and living there for any non-trivial length of time.

                                                Like, this is basically a trailer home. Trailer homes are a thing you can buy. How much do they cost? Is living in a trailer a comfortable or an uncomfortable experience? That’s a thing you can do, right now—buy a trailer home and live in it. Do people do that? Yeah. Do people do that who aren’t forced to do so by inescapable financial circumstances? No. Why not?

                                                1. 2

                                                  Ahhh, it’s the family bit I hadn’t considered.

                                                  I should probably admit that building such a monstrosity and then driving off with it in one form or another has been one of my not-so-guilty fantasies for a while. I’m not sure it would really be feasible but damn is it attractive in many ways.


                                                  This article points out that this kind of thing probably only works in a suitable climate: https://www.consumerismcommentary.com/extreme-frugality-living-out-of-your-car/

                                                  As for not realistic, yeah I probably shouldn’t have said ‘everyone’. What I really meant was something more like “If say, 20% of the population decided to do this, what would happen?”

                                                  1. 2

                                                    Trying to partially answer my own question in the spirit in which it’s posed:

This kind of lifestyle implicitly relies on some gray-market economics that only remain viable for as long as the number of people freeloading is small. For example:

                                                    • ‘Public’ Bathrooms
                                                    • Parking space

Are both things that people doing this generally squat in or take advantage of in ways that aren’t intended behavior. Once anything like this became remotely common, those resources would start to become more locked down, and with the increase in people trying to utilize them, the remaining spaces would become more crowded, causing even more previously ‘open’ spaces to lock down further. Ultimately people would probably have to pay for the land they’re using one way or another. I don’t know what the specific dollar amounts would look like, but with the sheer number of people suddenly creating demand, I would expect them to be substantial enough to offset a lot of the savings you get by giving up easy access to sane plumbing.

                                                    1. 1

                                                      Trailer parks are already a thing in the US, aren’t they?

                                                1. 1

                                                  This is excellent.

                                                  Nitpicks / requests-for-clarification / discussion-starters follow:

1. The stuff in “Remove Activation Barriers” seems more like “overcome activation barriers”. “Just do a little” is a good technique, no doubt, but it seems like ways to actually remove activation barriers are still something this list could use?
                                                  2. Precommitments—how to make them binding?
                                                    • Accountability is good; are there others?
    • What are ways in which accountability can fail to act as a binding mechanism? I can think of several: a. not discussing the thing with the other person(s) again; b. incentive to make promises you don’t keep, due to getting social status from making the promises; c. forgetting, and then other people also forget
                                                  3. Routines—how to stick to them? (Special-case question: what if there are disincentives to doing so?)
                                                  4. “Quantify and measure”—
                                                    • How to avoid Goodhart’s Law?
                                                    • “Make measurement low/zero cost” is an excellent idea
                                                  5. “procrastination is actually unpleasant”—but might be less unpleasant than “ugh fields”—have to learn to identify them! (But what if it’s difficult/impractical to do anything about them at the moment?)
                                                  6. “divide yourself into a principal and an agent. “ etc.—how???
                                                  7. “Cope with the psychological effects of random reward levers by dealing with them at the same time each day.”—what is this a metaphor for…?
                                                  8. “Establish a physical location for doing work that is comfortable but not distracting, one that encourages relaxed alertness.”—how?
                                                  9. Last two sections are especially excellent (at least they seem very useful to me personally)
                                                  1. 1

                                                    Where is this from?

                                                    Someone put this on a wiki!

                                                    1. 3

                                                      I think this is a high value problem for rationalists to solve, given that many problems would be much easier to reason about if you knew about all the prior approaches to the problem that had been tried and discarded. If information disappears or becomes inaccessible, that becomes impossible, and you no longer find yourself able to stand on the shoulders of giants. However, I don’t see very many other people talking about the problem of information preservation, and those that are don’t really seem to be doing a good job of it.

                                                      Actually, this is one of my major interests.

                                                      Why do you think I insisted that all the stories on Whistling Lobster be especially well indexed?

                                                      Why do you think I was so enamored when it turned out that the Lobsters software sends all submitted stories to archive.is?

                                                      Why do you think the feature I decided was most important for me to try and work on was search, of all things?

                                                      I’ve participated quite a bit in stuff like Archive Team precisely because I think that we need to be preserving knowledge now before we realize we want it later.

                                                      As for how to get organizations to do this, I don’t know. I’m currently pursuing an internship in which I try to document a lot of institutional knowledge in a white paper. I think that ‘person who preserves things’ is one of those roles that is probably unnamed in the business context.

                                                      1. 3

                                                        Why do you think I was so enamored when it turned out that the Lobsters software sends all submitted stories to archive.is?

It doesn’t seem to automatically send the links, though; it just provides the search bookmark. If you check the ‘cached’ link for this story, for example, there are no results on archive.is.

                                                        EDIT: Is it even possible to automatically archive stuff there given how it’s constantly making human visitors solve captchas?

                                                        1. 2

Confirmed that the caching isn’t automatic, but I just saved the link for this story and didn’t have to solve any captchas. Perhaps it is automatable, after all?

                                                          1. 1

                                                            This should now be fixed.

                                                          2. 1

                                                            Oh dear, that’s really unfortunate. I’ll have to work on that then. Thanks for pointing it out.
