1. 2

    … the next time the US has to fight a conventional war

    Is that considered likely anytime soon? When was the last conventional war? What will the next one be?

    1. 1

      The last conventional war the US fought was the opening round of Operation Iraqi Freedom (the “shock-and-awe” phase of the current Iraq war). But that almost doesn’t count, because it was the US fighting at the near peak of its power against an Iraqi army that had never fully recovered from the massive defeat inflicted upon it by the Gulf War.

      The last war the US fought against a functional military was the Gulf War, in 1991. While today, we look upon the US victory in the Gulf War as an inevitability, we must remember that it was a huge surprise when Saddam’s army and air force crumpled after about six weeks of sustained combat operations. Saddam’s military, at the time, was the fourth largest in the world, and was combat-hardened after the almost decade-long Iran/Iraq war of the ’80s. It was also far better equipped, relative to the US, than the North Vietnamese military had been in the ’60s. As a result, there were many who feared that the US was getting into another quagmire by attempting to liberate Kuwait in 1991. Instead, the world was treated to the largest military surprise since the German invasion of France in 1940. Never before had a military that large been destroyed so thoroughly in so short a time.

      However, 1991, at this point, is more than 25 years in the past. While the US military has been distracted by the twin insurgencies of Iraq and Afghanistan, other militaries (notably China and Russia) have been studying the conditions behind the US military’s victory in the Gulf War and have been coming up with ways to counter the advantages that the US displayed in that conflict. A distributed system like passive radar is one such way. Other ways include the usage of so-called “hybrid war” tactics displayed by the Russian military in Crimea, Donbass, and, lately, Syria. China has been developing a system of tactics and technologies that fall under the umbrella of “anti-access/area-denial” (A2AD), which focus on keeping the US Navy at a distance, in order to secure Chinese control over the South China Sea and potentially keep the US from coming to Taiwan’s aid if China should choose to invade Taiwan.

      The US military, in my estimation, is much like the British army prior to World War 1. The British military, riding high after its victories in the Napoleonic wars and the Crimean War, was confident in its own abilities and sanguine about the perceived weakness of its adversaries (rising Germany and the fading empires of Austria-Hungary and the Ottomans). As a result, the British military focused on fighting “brushfire wars” against native rebellions in Africa, India and Afghanistan (lol). Because of this, the British army found itself ill-prepared and under-equipped to fight a “high-intensity conflict against near-peer adversaries” (to use the words of modern military analysts).

      There is a school of thought that says that conventional war against near-peer adversaries is impossible, because any such war would inevitably escalate to nuclear weapons. I actually wish this argument were true. It would simplify our threat assessments greatly. However, whenever I hear this argument, I am reminded of all the arguments prior to World War 1 or World War 2 that suggested that a major war was impossible because of the unprecedented destructive capabilities of modern weapons.

      Do I think a war is likely? At this moment, I do not. Can I tell when the next war is going to occur? If I could, I would be working for the CIA, NSA or DoD, with a top-secret clearance. What I do have is a vague sense of unease. This sense of unease comes from the fact that the world today is multi-polar and unstable. The US military no longer enjoys the unchallenged hyperpower hegemony that it had at the end of the Cold War. Yet, it still acts and fights as if it does. I also remember that wars, when they do occur, can stem from causes that are extremely surprising at the time. Who would have thought that the assassination of the heir to the throne of Austria-Hungary by a Serbian nationalist would lead to a war that resulted in the deaths of millions?

    1. 1

      I really wanted to like this. But unfortunately, it appears to be written in Elvish. Why does Gil-Galad need to know algebraic topology?

        1. 1

          From that post:

          Some Righties talk about the idea of a post-political world — the idea that a system with less citizen input, on the continuum from Singapore to monarchy or neocameralism, would be more stable. But in a world without elections, there would still be shifts in power. It’s just that the mechanisms by which power shifts wouldn’t have occasional moments of relative transparency.

          So, reflecting on that, I agree with his premises – that mechanisms by which power shifts happen would have less transparency. But I disagree with his conclusion. It’s not clear to me that autocratic states are inherently less stable than democracies. Yes, autocratic states crumble (as we saw in the Arab Spring revolutions). But democracies crumble and collapse as well. Russia was fairly democratic in the ’90s before collapsing into Putinist autocracy. Thailand had a fairly robust democracy before it was locked down by a military junta. Turkey and Pakistan have flipped between military rule and democratic governance multiple times.

          And on the flip side, dictatorial China, despite all its internal problems, actually appears to be more responsive to its citizens than democratic India. While Delhi still has the worst air pollution in the world, the Communist Party has quietly cleaned up Beijing, in response to citizen unrest.

          I think, up until a certain point, competence matters more than representation. As it turns out, people don’t really care by what mechanism the government listens to their needs, as long as it implements policies that improve their daily lives. The hypothesis is that once an economy has fully industrialized, it’s impossible for government to be appropriately responsive to all the diverse interests of the country without democratizing. But the continued existence of autocratic China makes me doubt that theory more and more with each passing day.

        1. 3

          To put this in more rationalist terminology, one of the ways around Goodhart’s law is that you find a measure… but keep it secret. Then, so long as no one else figures out the measure, you can target it without the measure being corrupted by people gaming the system.

          The problem is that the measure will inevitably become corrupted because people aren’t that dumb. I have an anecdote from a friend who worked at Subway. Subway implemented its own version of the lime equation - except instead of counting limes, they’d count bread. So you couldn’t get a free sandwich, because that’d make the bread count come up short at the end of the day. But, you could order a 6-inch veggie, and end up with a footlong deluxe and the tracking system would miss that.

          Another example is college admissions. Good colleges don’t publicize the “lime equations” that determine who gets a spot and who doesn’t. But people reverse engineered the process anyway, simply by using other statistics about the student body that were published by the campuses. So, at this point, it doesn’t really do any good for colleges to keep their equations secret - all that does is disadvantage the people they’re trying to help the most.

          If a metric is high-stakes enough, it will be targeted, whether it’s secret or not, because no matter how secret the metric is, targeting it will have effects on the real world, and those effects, given time, will be observed and reverse-engineered.

          EDIT: The e-mail volume metric he mentions is exactly the sort of thing that’s easy for employees to game and corrupt. Given time, people will figure out that the people who send a lot of e-mails are the ones the boss favors. Then everyone will send lots of e-mails all the time, regardless of whether they’re getting any work done or not, and the advantage of the metric will disappear.

          1. 1

            This is essentially the objection I was going to post, so I entirely agree with it.

            Relevant to this is this discussion on LessWrong (1.0) about eigenkarma and Google’s PageRank algorithm. Some key quotes:

            Ilya Shpitser:

            This won’t work, for the same reason PageRank did not work, you can game it by collusion. Communities are excellent at collusion.

            Oliver Habryka:

            “This won’t work, for the same reason PageRank did not work”

            I am very confused by this. Google’s search vastly outperformed its competitors with PageRank and is still using a heavily tweaked version of PageRank to this day, delivering by far the best search on the market. It seems to me that PageRank should widely be considered to be the most successful reputation algorithm that has ever been invented, having demonstrated extraordinary real-world success. In what way does it make sense to say “PageRank did not work”?

            Ilya Shpitser:

            Google is using a much more complicated algorithm that is constantly tweaked, and is a trade secret – precisely because as soon as it became profitable to do so, the ecosystem proceeded to game the hell out of PageRank.

            Google hasn’t been using PageRank-as-in-the-paper for ages. The real secret sauce behind Google is not eigenvalues, it’s the fact that it’s effectively anti-inductive, because the algorithm isn’t open and there is an army of humans looking for attempts to game it, and modifying it as soon as such an attempt is found.

            Wei Dai:

            Given that, it seems equally valid to say “this will work, for the same reason that PageRank worked”, i.e., we can also tweak the reputation algorithm as people try to attack it. We don’t have as much resources as Google, but then we also don’t face as many attackers (with as strong incentives) as Google does.

            Ilya Shpitser:

            It’s not PageRank that worked, it’s anti-induction that worked. PageRank did not work, as soon as it faced resistance.
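
            For reference, the “PageRank-as-in-the-paper” being argued about here is just a damped eigenvector computation over the link graph, found by power iteration. A minimal sketch of that original version (the toy graph is made up; the real deployed algorithm is, per the above, much more complicated and secret):

              import numpy as np

              # Toy link graph: node i links to the nodes listed for i.
              links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
              n = len(links)
              d = 0.85  # damping factor, the value used in the original paper

              # Column-stochastic matrix: M[j, i] = 1/outdegree(i) if i links to j.
              M = np.zeros((n, n))
              for i, outs in links.items():
                  for j in outs:
                      M[j, i] = 1.0 / len(outs)

              # Power iteration: the fixed point is the principal eigenvector,
              # i.e. each node's "reputation".
              rank = np.full(n, 1.0 / n)
              for _ in range(100):
                  prev = rank
                  rank = (1 - d) / n + d * M @ rank
                  if np.abs(rank - prev).sum() < 1e-9:
                      break

              print(rank)  # higher value = more reputation flowing into that node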

            1. 2

              Thinking more on this, it seems like this is the takeaway:

              Goodhart’s Law is not so easy to outsmart. There are no silver bullets. As usual, there are only a whole lot of lead bullets, and those bullets look like this:

              1. Be really clear on what you are actually trying to maximize.

              2. Find a good and non-obvious measure of that thing.

              3. Keep that measure secret.

              4. Even as you use that measure, never forget what actual thing you’re after.

              5. Keep looking for new, non-obvious measures of that thing.

              6. Abandon the old measure and switch to a new and different one as soon as you detect even a hint of gaming.

              7. Repeat steps 2–5 indefinitely, forever.

              This is hard. It is not a one-stop solution. You can’t ever stop working on this. It requires continuous effort, possibly even continuously increasing effort (as new measures are probably going to be harder and harder to find).

              But I don’t know that there is any alternative.
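
              Mechanically, steps 2 through 6 amount to something like the toy sketch below: keep a pool of candidate proxy metrics, quietly spot-check the live one against the thing you actually care about, and rotate at the first hint of gaming. (Every metric name and threshold here is invented, purely for illustration; this is a sketch of the loop, not a recipe.)

                import random

                # Step 2/5: a pool of non-obvious proxy measures (names are hypothetical).
                # Step 3: the pool, and which entry is live, stays secret from the people measured.
                CANDIDATE_METRICS = [
                    ("emails_sent", lambda rec: rec["emails_sent"]),
                    ("docs_reviewed", lambda rec: rec["docs_reviewed"]),
                    ("tickets_closed", lambda rec: rec["tickets_closed"]),
                ]

                def agreement(records, proxy, true_score, pairs=200):
                    # Step 4: a crude check that the proxy still tracks the thing we actually
                    # care about: how often it orders random pairs the same way the (expensive,
                    # occasionally sampled) true judgment does.
                    hits = 0
                    for _ in range(pairs):
                        a, b = random.sample(records, 2)
                        hits += (proxy(a) > proxy(b)) == (true_score(a) > true_score(b))
                    return hits / pairs

                def audit_and_maybe_rotate(records, true_score, threshold=0.7):
                    # Step 6: at the first hint of gaming, abandon the measure for good and
                    # switch to the next candidate. Step 7: run this forever; step 5 (refilling
                    # the pool with new measures) is the genuinely hard, ever-growing part.
                    name, proxy = CANDIDATE_METRICS[0]
                    if agreement(records, proxy, true_score) < threshold:
                        CANDIDATE_METRICS.pop(0)
                        if CANDIDATE_METRICS:
                            print(f"{name} looks gamed; switching to {CANDIDATE_METRICS[0][0]}")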

          1. 2

            This is an excellent review! I just purchased and read this book as well, and I concur with all your points. Identitarian politics, transgressive politics, the various online communities, Gamergate, Milo, etc.; Nagle did her damn homework on everything.

            Regarding your opinion at the end, where you say “Where this book falls short, in my opinion, is in drawing connections between the online struggle of the alt-right and actual electoral outcomes” and “Nagle is remarkably short of evidence of alt-right online ideology translating into offline action”:

            I don’t think this is necessarily Nagle’s fault. In my opinion, the data needed isn’t available because simply not enough time has passed for these things to be seen in the physical world. It’s only been several months since the election. I think it will be a few more years before we are able to draw any meaningful conclusions.

            1. 3

              We don’t have data per se, but we do have a healthy dose of anecdote:

              “ I arrived semi-early at 3:30 or so, the line moved like molasses even after the doors opened at 4:00. People drove by us with signs bearing anti-Trump slogans like “Get Hate Out Of My State”. A lady drove by and stopped to ask us what everybody was in line for, I shouted that the line was for a Trump rally. She nodded and drove off. A young man behind me shouted after her that she was a cuck.

              . . .

              Turning the corner the full extent of the protest crowd became clear. It was massive, holding white and black or yellow and black signs with a wide variety of slogans. These I have on video so I can quote them directly: “Everett Stands United Against Trump”, “No Anti-Semites In The White House”, “Bigots Should Be Stripped Of Power”, “No Hate In The White House”, “Filipino Lives Matter, who will make the lumpia?”, “Say NOPE to the DOPE” where the “O” has been replaced with Donald Trump’s face. As we stepped toward this crowd (which threw at least one death threat at me) a dedicated volunteer stood there to remind us ‘not to feed the trolls’. “

              1. 1

                Those are both good anecdotes and I’ve updated my review with a link to your post, indicating that rallies and other campaign events provide a plausible mechanism for transmission of alt-right ideas and ideology into the mainstream political discourse.

              2. 1

                the data needed isn’t available because simply not enough time has passed for these things to be seen in the physical world

                This is perhaps true, but Nagle doesn’t really do a good job of laying out any hypotheses as to how online activity can translate into offline activism and ideology. Even when she talks about the left, she doesn’t really talk about how online activity directly translated into offline protest. She takes it as a given that the direction of influence runs one way: from online memes to offline ideology. I think it’s more nuanced than that, and that there is influence going in both directions.

                As the book stands, Nagle has laid out the sources of many of the online alt-right memes and how they interact and left it up to us to work out how those memes turn into offline action. I’m certainly not ungrateful for that. I think what she’s done has importance. But it’s not the whole story.

              1. 3

                I have to take issue with John’s description of boiling. Boiling has nothing to do with the size of the bubbles. Boiling is a phase transition where vapor bubbles start to form not just at the edges of the liquid, but also in the middle. And it’s fairly sudden, actually. Maybe I just perceive things differently, but I can usually notice the transition from almost-but-not-quite boiling to the “rolling boil” that’s described in cookbooks.

                Also, I found the anecdote about him arguing with his dad amusing because it so precisely captures the sort of “authoritarian high-modernist” tendencies that make rationalists so annoying to deal with.

                1. 2

                  Excellent essay!

                  I’ve got a comment and a question.

                  Comment. See also: PHP vs. insert your favorite web-app-appropriate language here.

                  Question. Does this pattern mean that no financial incentives exist that are strong enough to overcome this individual-hacker independence? Can the power of Lisp not be harnessed, by a group, for profit? Why not?

                  Edit: Perhaps the answer is here:

                  The Lisp Curse does not contradict the maxim of Stanislav Datskovskiy: Employers much prefer that workers be fungible, rather than maximally productive.

                  The author attributes this to the “venality” and “close-mindedness” of managers, but I wouldn’t be so quick to dismiss it. The expected productivity of an employee, after all, is his productivity × his reliability… and this is exacerbated by the dependence of a team on any given employee’s work, etc. Selecting for fungibility over maximal productivity seems to me to be nothing more than perfectly rational optimization on a manager’s part.

                  (Of course this is different from employees whose main value is to be sources of ideas, etc., such as in research labs and so forth, or for designers (especially design team leads), etc.—but that’s hardly the median case, and anyway is irrelevant to the subject of programming languages.)
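
                  To put toy numbers on the productivity × reliability point (every figure below is invented, purely to show the shape of the tradeoff): once you price in how much of the rest of the team stalls when an irreplaceable person is out, a merely good but fungible hire can come out ahead of a brilliant, idiosyncratic one.

                    def expected_value_per_week(own_output, reliability, team_output, blocked_share):
                        # own_output:    units/week this person produces when available
                        # reliability:   probability they are around and functional in a given week
                        # team_output:   units/week the rest of the team produces
                        # blocked_share: fraction of the team's work that stalls when this person
                        #                is out (high if their work is idiosyncratic, ~0 if fungible)
                        return own_output * reliability - (1 - reliability) * blocked_share * team_output

                    # Invented numbers: a 10x wizard nobody else can cover for, vs. a solid
                    # fungible developer whose work anyone can pick up.
                    wizard   = expected_value_per_week(10, reliability=0.8, team_output=40, blocked_share=0.8)
                    fungible = expected_value_per_week(4,  reliability=0.8, team_output=40, blocked_share=0.05)
                    print(wizard, fungible)  # 1.6 vs 2.8: the fungible hire wins on these numbers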

                  1. 3

                    I don’t know how much this is the case with Lisp, but a lot of Forth programmers say that while Forth makes for excellent productivity for a capable single programmer, it’s often quite impossible for any other Forth programmer to work on the first one’s program. The programs end up idiosyncratic private languages that are impenetrable without deep study. So it’s less about workers not being maximally fungible and more about possibly not being able to hire any new productive workers at all for an existing system.

                    Maybe the whole programming system should be set up like this, with a new kind of module system. The inside of a module is full of maximally productive weird hyper-expressive language shenanigans, and then somehow there’s a mutually comprehensible interface layer between the modules so that the software can be assembled from these interacting fiefdoms. That’s sort of how Unix and Unix programs worked, with the interface layer being plaintext streams. I’m not quite sure what this should ideally look like. The current libraries-and-APIs approach isn’t good enough; you’d want to be able to exchange structures, idioms and sublanguages, not just provide a black box with buttons to push. Also you would want to avoid boxing single components in a large system into suboptimal architectures by imposing too many constraints from the surrounding area; this part gets really hard.
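
                    As a toy of the Unix-style version of that boundary (the module names and record format here are invented): each module’s internals can be as idiosyncratic as they like, and the only mutually comprehensible thing is a newline-delimited plaintext stream.

                      # Module A: internally idiosyncratic; externally it only ever emits
                      # newline-delimited "key=value" text records.
                      def inventory_module() -> str:
                          hoard = {"sword": 3, "lembas": 12}   # weird private state, private idioms
                          return "\n".join(f"{k}={v}" for k, v in hoard.items())

                      # Module B: knows nothing about A's internals; it only parses the
                      # plaintext interface layer.
                      def report_module(stream: str) -> str:
                          records = dict(line.split("=", 1) for line in stream.splitlines() if line)
                          return f"{len(records)} item types, {sum(int(v) for v in records.values())} items total"

                      print(report_module(inventory_module()))  # 2 item types, 15 items total

                    (Which also shows the limitation pointed at above: a flat text record can carry data between the fiefdoms, but not idioms or sublanguages.)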

                    Maybe Alan Kay is on to something with the whole “cells receiving chemical signals and deciding what to do, not circuits receiving electrical impulses and reacting with lockstep determinism” metaphor he’s been trying to push for the last 40 years. (Also sorta related, Functional in the small, OO in the large.)

                    1. 1

                      Maybe Alan Kay is on to something with the whole “cells receiving chemical signals and deciding what to do, not circuits receiving electrical impulses and reacting with lockstep determinism” metaphor he’s been trying to push for the last 40 years.

                      Do you have any links/references about this? It sounds really interesting, and I’ve not heard of it before!

                      1. 1

                        There’s a bit of it in the Programming and Scaling talk that was linked here a while ago.

                        Some emails about the origin of OO, more on messaging.

                      2. 1

                        By the way, “Functional in the small, OO in the large” is a big part of how Swift works. (Well, how I write Swift, anyway. Non-functional (heh) approaches are certainly also available.)

                      3. 3

                        Does this pattern mean that no financial incentives exist that are strong enough to overcome this individual-hacker independence?

                        Paul Graham:

                        So you could say that using Lisp was an experiment. Our hypothesis was that if we wrote our software in Lisp, we’d be able to get features done faster than our competitors, and also to do things in our software that they couldn’t do. And because Lisp was so high-level, we wouldn’t need a big development team, so our costs would be lower. If this were so, we could offer a better product for less money, and still make a profit. We would end up getting all the users, and our competitors would get none, and eventually go out of business. That was what we hoped would happen, anyway.

                        What were the results of this experiment? Somewhat surprisingly, it worked. We eventually had many competitors, on the order of twenty to thirty of them, but none of their software could compete with ours. We had a wysiwyg online store builder that ran on the server and yet felt like a desktop application. Our competitors had cgi scripts. And we were always far ahead of them in features. Sometimes, in desperation, competitors would try to introduce features that we didn’t have. But with Lisp our development cycle was so fast that we could sometimes duplicate a new feature within a day or two of a competitor announcing it in a press release. By the time journalists covering the press release got round to calling us, we would have the new feature too.

                        It must have seemed to our competitors that we had some kind of secret weapon– that we were decoding their Enigma traffic or something. In fact we did have a secret weapon, but it was simpler than they realized. No one was leaking news of their features to us. We were just able to develop software faster than anyone thought possible.

                        1. 2

                          Yes, I’ve read that essay too, but—why only that example? Where are the others? If Lisp is so good, why isn’t everyone using it? Surely the fact that Paul Graham, of all people, used it, to make money, and now sings its praises, ought to spur a whole host of people to check it out, discover it, put it to use, and profit thereby? Why isn’t it way, way, way more popular?

                          1. 1

                            It seems like when people try to use it, they find the ecosystem nasty. The very visible classic example of people buying pg’s Lisp evangelism, actually building a thing in Lisp and getting burned was the original Reddit. Another report of a reasonably capable newcomer trying to grab onto Common Lisp and bouncing off was Steve Yegge. It seems like you really need people building a common ground of solid libraries nowadays, and can’t really build solid stuff in an ecosystem where everybody has their own 80% solution and the venerated standard solution was informed by 1980s computing practices.

                          2. 2

                            I have issues with that essay. Paul Graham never seems to acknowledge the selection bias that using Lisp imposes. It’s entirely possible that the reason ViaWeb did well isn’t because it was written in Lisp, but because it attracted the sort of people who learn Lisp. It’s entirely possible that if he’d attracted those same programmers, but somehow managed to get them to use Perl, he’d have been just as productive.

                            Lisp was not (and still is not) a “mainstream” programming language. That means you have to go out of your way to learn Lisp. The sorts of programmers who go out of their way to learn obscure programming languages and libraries are more likely to be more productive (or, in Silicon Valley terms, “passionate”) programmers who would be able to bring that additional productivity to bear no matter what language they were using.

                            1. 1

                              Selection bias seems like it could explain a lot of this puzzle.

                              The question is, do we find the same effect for other, comparably obscure / non-mainstream languages?

                        1. 2

                          I would be really interested to know how they did it. The fact that it’s precise down to the micron implies that it was a laser scan, but beyond that I can’t glean any further details about their methods.

                          1. 3

                            For the problem of verifying video, Bellingcat has a great guide that goes into how one can verify video content by looking at the context and contents of the video.

                            1. 3

                              Related: https://weblogs.asp.net/alex_papadimoulis/408925

                              Alex Papadimoulis brings up an analogy to a carpenter building a shelving unit asking, “Which is best to drive nails? An old shoe or a glass bottle?” Advice at the operational level would be to compare and contrast the characteristics of shoes and glass bottles and try to determine which would be best to drive nails in this circumstance. Advice at the tactical level would be to say, “Go to the hardware store and buy a hammer.” Advice at the strategic level would be to ask, “Why are you using nails here?” And finally, advice at the mission level would be, “Why are you even building a shelving unit?”

                              1. 3

                                I agree with this, and yet I also object.

                                Half of the reason I object is outlined in this comment. Here’s the other half.

                                Sometimes, doing something “the actual right way” instead of “the best way, given the constraints of the current advice level”, has costs.

                                Example:

                                “Which is best to drive nails? An old shoe or a glass bottle?” “Go to the hardware store to buy a hammer.” “Yes, very good, of course a hammer is best, but there are zombies outside, and they will eat me if I leave the house. Now, about my question?”

                                The equivalent of the zombies, in the context of software engineering, is the cost of a redesign, of switching, of re-learning, etc.

                                Take a real-world example. Suppose someone looked at OborWiki and said to me (apropos of me asking them for some advice on implementing a new feature, perhaps): “Obormot, your wiki system is built on PmWiki, which runs on PHP. But PHP is a terrible language. My advice to you is that you should switch to Rust.”

                                Is this useful advice?

                                How much effort would it take to “switch to Rust” (or to anything else)? I’d have to re-implement the entirety of the PmWiki software (which has been in active development for a decade and a half), AND all the third-party features, AND then I’d have to… but it doesn’t matter what I’d have to do next, this is already half a lifetime’s work, and for what, exactly? Well, it would be more correct / good / optimal. Ok. Obviously this is absurd advice, and the only sensible response is a dismissive eyeroll.

                                Certainly you might say “if only you’d done it the right way from the start…”—but as usual: a lesson is learned, but the damage is irreversible.

                                (Note, to pre-empt an obvious but flawed response: this is not a question of sunk costs! It’s a question of what it costs, going forward, to switch to “right” approach on a higher advice level. Often, the cost is so high that the actual choice is “continue to optimize on the current advice level only” vs. “abandon the entire endeavor”.)

                                1. 2

                                  Also, re: Papadimoulis’s specific question on what sort of response to such questions is more useful:

                                  The operational-level advice may be problematic for exactly the reasons he states. But the answer he gave (or would give) is useless! It’s nothing more than “UR DOIN IT RONG” in more words.

                                  Ok, so that’s a bad way to store things in a database. Why? In what way? What’s a better way? What is even the category of the problem? Come on, man! Don’t just say “RTFM!!”—give some pointers; some keywords to google, some vague gesturing in the direction of the topic, something!

                                  That sort of “ur doin it rong” response is very hard to distinguish from noise, and even if the advisee takes it to heart, it’s not very actionable; “read a database book”? who has time for that? Yes, that’s the ideal way to go, but if you require someone to read a whole database book before they can proceed on their current task, then they just won’t read anything and will proceed anyway. Whereas if you point them in the direction of some more specific advice, well, then they might actually learn something and improve.

                                1. 2

                                  I think this is a pretty neat project, and I wholly agree with the motivation behind it. Especially in this modern era of computer-aided everything, designers don’t really have a good feel for what materials behave like in the real world. I was listening to a machinist friend of mine complain earlier about how computer-aided design tools made it extremely easy to overspecify precision, without realizing that there is a big difference in cost between ordering a part that’s 10.0 centimeters long and 10.00000 centimeters long. In the old days, when you had to draw things out by hand, this difference was apparent, even at the drafting phase, because you’d have to use an entirely different set of rulers, etc., for the high-precision stuff. But today, it’s as simple as hitting zero a bunch of times, and there’s nothing in the CAD tools that says, “Hey, every zero you’re adding is doubling the cost of the part.”
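
                                  The nag could even be mechanical. A throwaway sketch of the kind of warning a CAD tool could raise, using the doubling rule of thumb from the anecdote above (the rule and the numbers are illustrative, not a real costing model):

                                    def estimated_cost(base_cost: float, dimension: str) -> float:
                                        # Count the decimal places in the dimension as written; per the
                                        # rule of thumb above, each extra digit of specified precision
                                        # roughly doubles the machining cost.
                                        decimals = len(dimension.split(".")[1]) if "." in dimension else 0
                                        return base_cost * (2 ** decimals)

                                    print(estimated_cost(50.0, "10.0"))      # 100.0  (one decimal place)
                                    print(estimated_cost(50.0, "10.00000"))  # 1600.0 (five decimal places)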

                                  1. 1

                                    The problem is… I don’t see it. There are too many layers of indirection between the common person and the world of Silicon Valley for them to have a sense that they own much of anything. At best, they realize that their money is going into something called the market. If they are very astute, they may even realize that the market is composed of individual companies, whose stocks vary somewhat independently. And this market has been going up as of late, so it’s a good idea to invest.

                                    I don’t think anyone has made the connection to robots. I think what we’re seeing is the normal (if somewhat delayed) economic recovery from the biggest financial crisis since the Great Depression. I don’t think you need to bring in additional explanatory factors.

                                    EDIT: I just looked up what FANG stood for. Apparently it means Facebook, Amazon, Netflix, Google. And all I can hear when looking at these four stocks is, “One of these things is not like the others. One of these things doesn’t belong.” More seriously, I get that Facebook, Amazon, and Google are all working on AI and robotics innovations that could put people out of business? But Netflix? Netflix is metamorphosing into a media company. They’re producing their own shows. I don’t see how owning Netflix gives me a share of the upcoming robot revolution, and I think that its inclusion was merely to make the cutesy acronym. The other abbreviation I’ve heard is GAFA (Google, Apple, Facebook, Amazon), and that at least has companies that are all leaders in either AI or automation.

                                    EDIT2: Fixed the fact that edit 1 was in between paragraphs 1 and 2

                                    1. 3

                                      This is my argument for memorizing things, even in this age of having near continuous access to the Internet. It’s impossible to “think critically” about something without having something to trigger the reflex that engages system 2. For me, having context or other historical data in long term memory means that when presented with more information, I have more opportunities to think, “Wait, that’s not right,” and closely examine the information being presented, rather than just taking what’s presented at face value.

                                      1. 2

                                        Indeed.

                                        I’ve long said that the “external brain” idea—where the smartphone in your pocket, with its access to Google and Wikipedia, is seen as no different than having all that knowledge in your head—doesn’t hold water; but I always have some trouble explaining my view to skeptics. What you said is a good crystallization of it. It is, I think, not about whether the knowledge may be retrieved on demand, but whether understanding can be synthesized from it; and that is a process which, for actual memories in your actual brain, is ongoing at all times, and largely unconscious. And we often can’t know in advance what information or knowledge is relevant to what topic; so having to seek it out, in any given circumstance, means that we won’t find all the relevant stuff that would have been supplied by our brains if we’d had it all in our head.

                                        1. 3

                                          If you’re talking to nerds (which you probably are, if you’re talking to people who believe seriously in the ‘external brain’ concept), you can say, more succinctly, that the difference between having information in your head and having information on your phone is analogous to the difference between having data in L2 cache and having data in main memory. In terms of the ratio between lookup times, it’s pretty similar.
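
                                          Back-of-the-envelope, with deliberately rough figures (all of the numbers below are order-of-magnitude guesses, not measurements):

                                            # Rough ballpark latencies; only the ratios matter here.
                                            l2_hit_ns      = 5     # data already in L2 cache
                                            dram_ns        = 100   # going out to main memory
                                            recall_s       = 2     # pulling a fact out of your own head
                                            phone_lookup_s = 60    # unlock, type the query, skim the result

                                            print(dram_ns / l2_hit_ns)         # ~20x
                                            print(phone_lookup_s / recall_s)   # ~30x, the same order of magnitude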

                                      1. 2

                                        I found the “case study” to be a rather easy case. In that case, everyone agreed that the person was a low-quality contributor and the person agreed to leave when asked politely. What happens when that isn’t the case? How would you handle the situation where there’s a low quality contributor who has friends that are willing to defend him, and isn’t willing to go without raising a huge fuss?

                                        1. 1

                                          Like the book says, in the given case, the project could in fact have banned J. Random; they didn’t end up having to, but that was certainly an option. So that takes care of “what if they don’t want to leave”.

                                          As for friends willing to defend him—well, that’s the point of “gather allies”, isn’t it?

                                        1. 2

                                          As to the linked comment itself—well, it’s quite right, of course, about what late-stage communism looked like… but I am hesitant to say that “premium mediocre” is a good analogy or description. This is mostly because I still don’t understand what the heck “premium mediocre” is supposed to mean, nor am I convinced that it’s a sensible category. (We talked about this on IRC and didn’t really reach any conclusion; does anyone here think they have a good account of this concept? Or is Rao just talking out of his ass on this one?)

                                          1. 3

                                            I think ‘premium mediocre’ is one of those things where the author hasn’t entirely digested their idea yet, or they haven’t really completely felt out its exact gradations. Let me try: what is premium mediocre?

                                            At the object level, premium mediocre is fairly simple. It’s a sprinkle of golden chips on a block of lead. It’s paying for fancy options on a frankly un-fancy product. Premium mediocre is a ricer’s hot ride, or all the options on an economy car.

                                            At the narrative level, premium mediocre is a social performance of wealth and convenience in an environment where neither is really sustainable for normal people. The point of the performance is costly signaling at its grotesque finest: prove that you’re an upward aspirant by showing you have the attitude of aspiration, even if what that mostly amounts to is pretending to be someone who isn’t going to be a cab driver. The basic idea is that we’re now in an economy which is incredibly high-variance, to the point where there is a minority of the population which has real valuable skills. Everyone else’s hopes are to be precariat gig-economy cosmic garbage or somehow manage to procure a bullshit job of some sort. This is the ‘new economy’, and Maya Millennial is trying to compromise on this whole ‘meaning versus financial situation’ spate of existential worries.

                                            The thing is, the important thing which distinguishes ‘premium mediocre’ from many other similar performances of gaudy pseudo-wealth is that nobody is fooled. Maya is well aware she’s full of shit; her employers mostly want her real economic situation to be illegible to them so they can optimistically think the best of it; her parents want to see their child thriving, so this performance is partially a way to avoid the pain of having to acknowledge that she isn’t.

                                            1. 2

                                              I agree with your description of what premium mediocre is, but I disagree with Rao’s use of it to explain civilization on a grand scale. I think “premium mediocre” is a useful concept to describe things that have the trappings of fanciness without any of the actual quality, and while I appreciate having the punchy tag to attach to the concept, I don’t think “premium mediocre” is anything new.

                                              To pull out some earlier examples, one of the reasons the “gilded age” was called that was that everything in it was premium mediocre. Things were gilded, rather than being made of gold. It was a hollow society, with a thin layer of extremely rich people, and a bunch of industrial proles toiling in the factories below. Those proles consumed “premium mediocre” goods like newly cheap ready-to-buy clothes or novel consumer goods (stainless steel tableware), which gave them some of the trappings of the actually rich without having the actual qualities of the goods that the rich consumed. Ready-to-buy isn’t tailored haute couture, and stainless steel isn’t silver.

                                          1. 3

                                            I have Opinions about this essay.

                                            Thankfully, I’ve already said them, on SSC, so I’ll just link to my comment there.

                                            1. 1

                                              Also, ironically, Stephenson switched to OSX as soon as it came out. So there is that to consider.

                                            1. 2

                                              This is great.

                                              I was also reminded of Fermi estimates when I read this. This is an excellent example of why I am very skeptical of them, especially in domains where the estimator has little or no domain knowledge. There’s really no way to know whether the numbers you’re estimating are even remotely within the right order of magnitude, and there’s no guarantee at all that you won’t get most of them basically right but then be off by twelve OOM on one of them!

                                              Of course, my skepticism could be biasing me here. I wonder if it’s possible to give any sort of general account of which kinds of problem domains make high-OOM errors possible and not exceedingly rare, and in which kinds they are exceedingly rare. This seems like it would be a useful approach to answering “what kinds of problem domains are likely to yield to Fermi estimates, and, conversely, what domains it is dangerous to attempt Fermi estimates in”? (And what affects the answer? Is it only subject-independent characteristics of problem domains, or do a subject’s own biases and background knowledge (or lack thereof) overshadow differences in domains? Or am I thinking about this all wrong?)

                                              1. 1

                                                Fermi estimates aren’t really estimates and should not be treated as such. They’re upper (or lower) bounds. Maybe we should call them Fermi bounds instead.

                                                For example, let’s try to figure an upper bound on the number of piano tuners in New York City. I know that New York City has somewhat less than 10 million people, so I assume that the population of NYC is 10 million. And of those 10 million, let’s say 1 in 10 people own a piano. Given my personal experience, far fewer than 1 in 10 people in real life own a piano, so even if the piano ownership rate in New York is far higher than the piano ownership rate on the west coast, 1 in 10 should cover it. So that gets us an upper bound of 1 million pianos. Now, let’s say that each piano has to be serviced once a week, and that each piano tuner can only service a single piano per day. Assuming that piano tuners work 5 days per week, this means that we have an upper bound of 200,000 piano tuners in NYC.
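
                                                Written out, the bound is just a product of deliberately generous factors (same assumed inputs as above):

                                                  # Every input is intentionally an over-estimate, so the result is an
                                                  # upper bound, not a guess at the true number.
                                                  population              = 10_000_000  # NYC is somewhat under this
                                                  pianos_per_person       = 1 / 10      # surely an over-count of ownership
                                                  services_per_piano_week = 1
                                                  pianos_per_tuner_week   = 5           # one piano a day, five days a week

                                                  pianos = population * pianos_per_person
                                                  tuner_upper_bound = pianos * services_per_piano_week / pianos_per_tuner_week
                                                  print(tuner_upper_bound)              # 200000.0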

                                                Now, is this anywhere close to the real answer? Almost certainly not. But it does establish an upper bound. If someone were to claim, for example, that there were a million piano tuners in NYC, I’d be able to say that’s a pretty dubious estimate. But if someone said that there were 50,000 piano tuners in NYC, the Fermi bound doesn’t give me any data.

                                                Fermi bounds work well when you have a good sense of the absolute upper or lower bound of your data. I know that there are fewer than 10 million people in NYC. I know there are fewer than a trillion stars in the Milky Way. But I don’t have a good sense of what the upper bound is on how many atoms of gold there are in a gold ring. Moreover, chemistry is one of those fields where things can vary by many orders of magnitude. Computer science is another such field. Fermi estimates tend to fail in those situations, simply because the amount of variance in the data overwhelms our ability to put bounds on it. Fermi estimates come from cosmology, where things only vary by four or five orders of magnitude, and you have pretty good handles on the upper bounds for values like “number of stars in the Milky Way” or “number of galaxies in the observable universe”. They work less well in other fields that have greater variance in the magnitudes of their numbers.

                                                1. 1

                                                  Yes, this is a good example of what I mean:

                                                  so even if the piano ownership rate in New York is far higher than the piano ownership rate on the west coast, 1 in 10 should cover it

                                                  What if some institutions have many pianos? Are pianos the kind of thing that a school or a concert hall might have dozens or hundreds of? I have no idea!

                                                  let’s say that each piano has to be serviced once a week

                                                  Why? What if it has to be serviced daily? Do I know that to be false? I do not!

                                                  and that each piano tuner can only service a single piano per day

                                                  Why? What if tuning a piano takes a week? Do I know that to be false? I do not!

                                                  Now our upper bound is higher by at least an order of magnitude, maybe more; and a million piano tuners no longer seems weird.

                                                  Of course, if there were a million piano tuners in NYC, I’d have met one, or heard of such people existing, at least, whereas in fact the only reason I even know there is such a profession as “piano tuner”[1] is from this exact silly archetypal interview question—but that sort of reasoning has nothing to do with Fermi estimates!

                                                  [1] Actually, the way the question is phrased, it doesn’t exclude the possibility of “piano tuner” being a piece of equipment rather than a person, so maybe this whole approach is wrong! Why am I assuming a “piano tuner” is a person who tunes pianos? I don’t know anything about what “tuning” a piano even is, or means. Heck, maybe a “piano tuner” has nothing at all to do with pianos, and the name is coincidental or something.


                                                  My point is that “the numbers in this field have great variance” is not the only failure condition of Fermi estimates; “minimal domain knowledge” is another, because while the numbers in a field may have (relatively) low variance, subjectively the variance may nonetheless be arbitrarily high, because you personally have no idea what range of numbers to expect (on account of your lack of domain knowledge).

                                              1. 1

                                                Bad analogy. I am talking about books in general, sentimental value applies only to specific books.

                                                I loved my teddy bear*, but I don’t claim all teddy bears are sacred, or that everyone should love teddy bears, or that anyone who doesn’t is fundamentally flawed in some way, or that the destruction of a teddy bear is inherently wrong.

                                                But that’s just the thing. You might not love each and every teddy bear, but someone does. Thus, even if a particular physical book is not valuable to you, I would say it’s valuable to someone. I find the rationalist tendency to treat books as disposable objects somewhat abhorrent.

                                                1. 1

                                                  There are a lot of particular physical mass-market paperback books in the world right now. You need to work pretty hard to get all of them matched with affection from an actual human. I think this is actually going to change now that ebooks and e-readers are an established thing. The physically nasty books that basically just serve as maximally cheap text delivery platforms are first in line to get dropped in favor of ebooks. The physical books that have a better value proposition against ebooks are nice, more expensive durable hardcover ones. And there might be fewer of those overall, biased toward books people want to keep around on their bookshelf instead of reading through once and then forgetting about.

                                                  1. 2

                                                    There’s something to what you say, I agree, but ultimately I think that drawing the distinction between “softcover” and “hardcover” takes this in the wrong direction. I have some softcover books which I value quite a bit, more than I value some of my hardcover books. The actual spectrum of value is perhaps not entirely orthogonal to the book’s physical form factor (there is a correlation, for various obvious reasons), but it’s orthogonal enough that I don’t consider it to be a very useful dimension to think about. The following factors all seem more relevant:

                                                    • Content
                                                    • Availability/rarity
                                                    • Uniqueness (related to the previous point but not identical to it)
                                                    • Personal history with the book

                                                    (Some examples, from glancing at my bookshelf: my hardcover mass-market volumes of Sergei Lukyanenko’s less… artistically valuable… novels, vs. my softcover copies of Gödel, Escher, Bach and The Cyberiad; a hardcover copy of Scott Adams’ The Dilbert Principles vs. a softcover copy of Robyn Dawes’ Rational Choice in an Uncertain World; etc. Also, I have some beautifully custom-bound, hardcover, gold-embossed volumes of mass-market cheaply printed Soviet children’s books; where do you classify those? And this is just the tip of the iceberg, example-wise…)

                                                    1. 1

                                                      Yeah, I was mostly going for the “correlation for various obvious reasons”, and that there are currently probably lots of physical books that are of pretty low value, not an absolute demarcation of exactly what the non-valuable books are like.

                                                      1. 1

                                                        Yeah, agreed that there’s a correlation. What occurs to me about this is that this particular correlation is probably strongest for relatively current books, with large publishing runs, but gets weaker as you go back in time and out towards the tail of the popularity / rarity distribution.

                                                  2. 1

                                                    Agreed, and this is one of the tendencies that set me off from “rationalist culture”. (I guess many folks in said culture found the shredders in Rainbows End totally unobjectionable, but to me they were an element of “mundane dystopia”…)

                                                  1. 2

                                                    Question: does anyone here have opinions about whether the content of this article can be taken at face value or anything like it? Is HTTP really obsolete? Is IPFS really a good replacement for it? (Bonus question: is this article actually mostly an advertisement for Neocities?)

                                                    1. 2

                                                      So, I discussed this on IRC, but I can consolidate and summarize my thoughts here. I don’t think this article can be taken at face value. At the very least, no browsers yet support IPFS. IPFS support on the server is still nascent. IPFS does not solve the persistence problem any more than Bittorrent does. There will still need to be a server out there “seeding” the content onto the IPFS network in order to prevent it from disappearing. IPFS, in its current form, is a dynamic CDN, which automatically caches and redistributes the most popular data. That’s pretty neat, insofar as it means that you can host content on some cheap shared hosting solution and not worry about a frontpage post on Reddit or Hacker News melting your site. But without some solution to the incentive problem of getting people to host content they’re not interested in, I don’t see IPFS replacing HTTP as it currently stands.

                                                      1. 2

                                                        I can’t vouch for IPFS as the specific implementation, but I think that a named data networking infrastructure for accessing static data is an obvious missing piece in the current internet infrastructure. The new wrinkle here is that this would mean splitting internet pages into things that can never change (a specific image, an archived web page) and addresses to things with changing content (an ongoing comment thread, the front page of a news site). Currently we treat everything as the second type of thing, which makes it easy to not think about things that need to change as a separate category, but also makes link rot happen.

                                                        The big draw for an IPFS style system for me is that it would make personal archive caches much more automatable. Static websites from years ago and miscellaneous PDF documents could be indexed with hash identifiers that are guaranteed to fetch the correct content if it is still somewhere to be found.
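
                                                        The property doing the work there is content addressing: the identifier is a hash of the bytes, so whatever you fetch can be verified against the identifier no matter who served it. A minimal sketch of the idea (this is the general shape, not IPFS’s actual CID/multihash format):

                                                          import hashlib

                                                          store = {}  # stand-in for "whoever on the network happens to have the bytes"

                                                          def publish(content: bytes) -> str:
                                                              address = hashlib.sha256(content).hexdigest()  # the address IS the hash
                                                              store[address] = content
                                                              return address

                                                          def fetch(address: str) -> bytes:
                                                              content = store[address]
                                                              # Verification is what makes a cache or archive trustworthy: if the
                                                              # bytes don't hash to the address, they aren't the document you asked for.
                                                              if hashlib.sha256(content).hexdigest() != address:
                                                                  raise ValueError("content does not match its address")
                                                              return content

                                                          addr = publish(b"an archived page, byte for byte")
                                                          print(addr[:16], fetch(addr))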

                                                        1. 2

                                                          What would be really interesting would be to see IPFS merge with filesystems, so that there is literally no distinction between your local cache and the network. When you (the user) request data, your computer fetches it, transparently, regardless of whether it was stored locally or available on the Internet. It would be the apotheosis of Sun’s old motto, “The network is the computer.”

                                                      1. 4

                                                        The problem is that, as an interviewer, I have 45 minutes. Realistically, 15 minutes are going to go to introductions and to allowing the interviewee to ask me questions about the position. So I have 30 minutes. This is way too much to cover in that time. I can cover maybe one or two aspects of the task, but to cover the entire set would be tantamount to conducting a full interview loop.

                                                        I think Steve Yegge is forgetting the purpose of the phone screen. The purpose of the phone screen is to filter out the bozos. It’s to filter out the people who’re blatantly unqualified. I’m surprised that Steve thinks this way because Amazon’s interview training for phone screens could be summarized as, “If you think the person has a 50% + epsilon chance of passing a full on-site interview loop, vote yes.” Obviously, with a standard like that, you are going to have a substantial number of candidates who come in and just bomb the loop.
