>Well, it sounds like you went from being an atheist to an agnostic. That certainly is a more rational and less dogmatic position to take, based on what we know.
I'm not sure why this argument shows up so often (and it's not even limited to religion), but it's an incredibly bad one. The positions here are not a ternary matter. The options are not limited to 100% one way, 100% the other, or right in the middle; they form a continuum.
The 'most rational' position lies somewhere on that continuum. When converting to the usual ternary labels, one extreme or the other may end up being the most rational position. The center is not automatically best.
> I was a staunch atheist before psychedelics, now the notion of a purely material universe seems naive.
Continuing to try to avoid the religion topic directly, I'd caution you against reading too much into your trips, something your excerpt oddly points out as well. Our brains are rather questionable devices for perceiving the world as it really is even when they are functioning properly. There is no reason to believe intentionally screwing with their functioning would somehow increase their ability to perceive the world truthfully.
I can see potential value in using trips to improve lateral thinking, and perhaps even for insight into yourself, but in terms of exposing hidden secrets of the universe... There's no rational basis for that, and the most likely explanation seems to be the combination of the intentional misfiring and the usual imperfect operation of our brains.
I don't think anyone here is saying that using psychedelics is going to 'expose hidden secrets'. It's more that they can help you become aware of the holes in your usual way of thinking.
One pattern I've noticed while reading reports of bad trips is people being so miserable during the experience that it forces them to acknowledge that existing in this universe can be really, really shitty at times, shattering the notion, built on their limited experience as a relatively well-off white person in 21st-century USA, that everything's going to be okay. The reports are often written as if whatever drug they took is to blame for their new existential crisis, but I see it more as the fault of the bubble they were in. If they come out of the experience with more compassion for others who are less privileged, then I'd say it's an improvement.
> There is no reason to believe intentionally screwing with their functioning would somehow increase their ability to perceive the world truthfully.
Right, my point is that psychedelics are great for exposing falsehood in your thinking, but not necessarily uncovering truth.
> The 'most rational' position lies somewhere on that continuum. When converting to the usual ternary labels, one extreme or the other may end up being the most rational position. The center is not automatically best.
What are you talking about? I think you're misrepresenting my position. I'm nowhere on the continuum; I'm agnostic.
"I don't have enough knowledge to make a judgement of whether the moon is made of cheese" is a perfectly valid statement. And if you base your personal beliefs on whether they'll impress other people, then I suspect you're making your own life harder for yourself than it needs to be.
Just wanted to say that I don't understand why you are getting downvoted for this. I found your original comment, and especially the blog excerpt, quite reasonable.
> it sounds like you went from being an atheist to an agnostic. That certainly is a more rational and less dogmatic position to take, based on what we know.
Likely because the poster is assigning some linear value to metaphysical positions that cannot be constrained in such a system.
Agnosticism is more rational than Atheism "based on what we know"? Come on now.
I think this mostly comes from being less than totally confident socially, or perhaps even having some social anxiety.
To my knowledge I've never accidentally harassed somebody and can't imagine it happening. But it's still something I worry about because I'd really hate to do that to somebody.
It just adds another layer of concern onto an already somewhat uncomfortable proposition.
I even found one of my good friends (who is female) by offering help in a CS lab. I still worry about these things, and all things being equal, I know I have asked, and would ask, a question of the guy sitting next to me rather than the girl. It's just easier.
I think you are seeing a more extreme position than they were taking.
It's not that there isn't value in having friendly relationships with your colleagues no matter their gender. The issue comes up in comparing the value of a specific relationship with the potential costs of it (or of attempting to create it).
Right or wrong, befriending a male colleague has virtually zero risk. The chances of anything happening to severely damage your career or social standing are essentially nil. Even in a severe situation, there isn't much you can do to cause a problem without acting in a pretty horrible way that's also documented. There just isn't much you can do there to really provoke a highly emotional reaction or scare HR.
The same cannot be said of attempting a relationship with a woman. People are very sensitive about sexual harassment, and HR wants absolutely nothing to do with it. The exact lines for sexual harassment are necessarily a bit blurry, and even if precise lines did exist, continually coming just barely short of crossing them would still seem like harassment to me.
Is the friendly relationship with a woman coworker so much more valuable than one with a man that it is worth taking on the additional risk? I don't think so.
I don't intend to have a particularly strong relationship with all my coworkers and I imagine most people are the same. This means I get to be choosy about which ones I engage in this way. The risk-reward ratio just doesn't seem favorable to engaging women in this way.
Now, personally speaking I have slightly more female friends than male friends, unless we're counting people I might talk to once every few years. My oldest friend, by far, is female. I would never suggest men should not be friends with women. That's insane. I'm just not sure what the incentives are for me to try to specifically befriend female coworkers.
I don't think these statements are as absolute as you take them. I'm also not sure where you go if "almost certainly did(n't)" is no longer sufficient. At that point you basically have to give up on saying anything about anything.
Science doesn't (intentionally) forget about unlikely alternatives. That doesn't mean there is any good reason to treat highly unlikely scenarios as the equals of the vastly more likely cases.
And arguing anything about chances based on things that have already happened is kind of bullshit. They already happened, however unlikely. We know unlikely things can and do happen occasionally. That does not mean other unlikely things are more likely to happen.
And if you really insist on doing that, keep in mind the timespans here also differ by orders of magnitude.
Heh, the math will never even get brought up in the trial. They'll just talk about DNA matches and everybody on the jury will think this guy is scientifically proven to be guilty.
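To make the jury point concrete, here's a toy base-rate calculation with entirely made-up numbers (nothing from any actual case). Even a tiny random-match probability yields several expected coincidental matches once you trawl a large database:

    # Toy numbers, purely illustrative.
    random_match_prob = 1e-6       # chance an unrelated person matches by luck
    database_size = 10_000_000     # profiles the sample is searched against

    expected_coincidental_matches = random_match_prob * database_size
    print(expected_coincidental_matches)  # 10.0 -- ten innocent matches expected

So "the match probability is one in a million" is not at all the same claim as "there is a one in a million chance the defendant is innocent," which is exactly the leap a jury is likely to make.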
I think many of these issues can be addressed with mechanisms proposed by the author.
Mainly, the more complex tags, which can themselves refer to other tags.
1. This is probably the trickiest one. You may be able to do some sort of translation between a hierarchical system and the tag system using tags themselves. You could have a series of tags that refer to each other, such that the hierarchical location is essentially encoded in the tags themselves (see the sketch after this list).
2. Again, maybe just special tags?
3. Yeah, again, tags. Just tag the thing with the media it's on.
4. Aside from the basic UI side of things which should help, there is the idea of shared tagging systems. I don't recall if that came from the author or another commenter on HN. And you can basically ask the same question about hierarchical systems. It's not exactly a solved problem there either.
5. Again, the complex tags. Just make a tag for the project.
6. Obviously UI is a big question. I'm not sure how it relates so much to media-specific browsers though. They basically present a different view of a section of a filesystem. You have to do some work to let them do this, or else use a system like iTunes and buy all of your media through them.
7. Although I feel this is well addressed by the author, one thing I think you aren't considering is that each of these applications requires its own setup in order to provide that view. You often can't just take the directory from one of these programs, point a different program at it, and have it all work properly. If you only have one program for each media type and never want to use anything else, that works, sort of. Many years ago I directed iTunes to redo the file layout for my music collection and rendered it effectively useless for direct browsing. I never really recovered from that due to the time involved in sorting it out.
And mutability isn't totally handwaved away either: with the complex tag system you could tag mutated works with a reference back to the original. This doesn't cover the case where you don't wish to retain the original, but then you could just do a simple find/replace with the old and new hashes in the simplest case.
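Here's a minimal sketch of both ideas, with every name invented for illustration: "directories" are just tags carrying a parent reference, and a mutated work carries a tag pointing back at the original's hash.

    # Minimal sketch; all names are hypothetical.
    # Tags can themselves be tagged, so a "directory" is a tag
    # whose own tags include a parent reference.
    tags = {
        "dir:photos":      {},
        "dir:photos/2019": {"parent": "dir:photos"},
    }

    # Files carry an "in" tag naming their directory-tag, so the
    # hierarchical location is encoded in the tag graph itself.
    files = {
        "beach.jpg":      {"in": "dir:photos/2019", "type": "jpeg"},
        "beach-edit.jpg": {"in": "dir:photos/2019", "type": "jpeg",
                           "derived-from": "hash:1a2b"},  # mutated work
    }

    def location(name):
        """Rebuild a path-like breadcrumb by walking parent references."""
        crumb, tag = [], files[name]["in"]
        while tag is not None:
            crumb.append(tag)
            tag = tags[tag].get("parent")
        return " <- ".join(crumb)

    print(location("beach.jpg"))  # dir:photos/2019 <- dir:photos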
I think you're missing some of the subtleties of solving these problems using "just more tags."
In a hierarchical system, a lot of these organizational issues are local. If I have one directory that consists of a project organized one way, and another directory that consists of a different project organized a different way, those different organizations don't really interact with each other in any way.
If you are using tags for everything, in order to avoid weird mishmashes of different ways of using tags, you would need to either have a completely standardized tagging system that everything used consistently, or you'd have to always include various contextual information in your queries or in your browsing in order for the queries to make sense. For instance: [mount: my-hd][project: my-project][type: jpeg]
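A minimal sketch of what such a contextual query could look like in practice (the store layout and every name here are invented):

    # Hypothetical tag store: file -> {tag-key: tag-value}
    store = {
        "a.jpg": {"mount": "my-hd", "project": "my-project", "type": "jpeg"},
        "b.jpg": {"mount": "my-hd", "project": "other",      "type": "jpeg"},
        "c.txt": {"mount": "my-hd", "project": "my-project", "type": "text"},
    }

    def query(**wanted):
        """Return files whose tags include every requested key/value pair."""
        return [f for f, t in store.items()
                if all(t.get(k) == v for k, v in wanted.items())]

    # [mount: my-hd][project: my-project][type: jpeg]
    print(query(mount="my-hd", project="my-project", type="jpeg"))  # ['a.jpg']

Without the mount and project context, the same type query would sweep in every jpeg on the system.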
I think you overstate the problem with different applications as well. For a large amount of the metadata that is relevant for these applications, there is a standard tagging system. ID3 for music, EXIF for images, XMP for various image and video formats. It's true that there is some metadata that these applications store in proprietary databases, but that's mostly an issue of it being difficult to come to a consensus on standards that meet everyone's needs, and it's easier to just write some proprietary metadata somewhere. With tagging systems, if there wasn't agreement on the schema of tags, you'd still have the same issue.
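Reading those standardized fields is already routine. For example, a quick sketch using the mutagen library, one common ID3 reader (the file path here is hypothetical):

    # Reading standardized ID3 metadata with the mutagen library.
    from mutagen.easyid3 import EasyID3

    tags = EasyID3("song.mp3")  # hypothetical path
    print(tags.get("artist"), tags.get("album"), tags.get("title"))

EXIF and XMP have comparable readers; the proprietary-database problem only starts where the standards end.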
I don't think it's a bad idea to consider alternatives that are more general and more flexible than what we're doing now. But I do think it's pretty easy to handwave about how nice a tag-based system would be, and a lot harder to solve all of the little problems that are going to come up, turn it into a real, coherent, working whole, and then get enough critical mass that it is used outside of a small niche with a handful of applications.
I'm sure you're right that there are a lot of overlooked subtleties. That said, I'm not sure some of the problems you mentioned would actually exist, or at least I'm not sure they would be any worse with a tag system than with a hierarchical one.
For example, how is that example query any worse than the current situation? Right now you'd navigate to the project directory (which already requires specifying more than your example does) and then use some search method depending on OS/WM/etc. And then you still end up with a big list of jpegs to look through. This is sort of a worst-case example for both systems, and I still think the tag system comes out ahead here - by a little - just because it would let you spread the project across multiple drives without requiring two searches when you don't know which drive the desired image is on. You can improve the situation for either system by manually specifying more information: put better tags on the images, or put them in more specific directories, or title them.
As for specific applications, it's not the metadata encoded into the files that I'm talking about. It's as simple as the directory structure itself that is used to store all of this. I can't have one application organize everything and then trivially point another application at the directory and have it work.
With a tag-based system this starts to change. I don't need to tell a new music player where my music is, and then go through whatever process is needed to let it properly work with the current directory organization. At worst I tell it which tags to include or perhaps exclude. From there many options exist. Maybe it pulls in metadata from the files themselves. Maybe I provide an external file in whatever format. Maybe I tell it which tags to associate with which fields. You could do a lot of things here.
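A minimal sketch of what "which tags to include or exclude" and "which tags map to which fields" might look like (every name here is invented):

    # Hypothetical player config mapping its fields onto my tag names.
    # field_map would feed the player's display fields; shown for shape only.
    field_map = {"artist": "tag:creator", "album": "tag:collection"}
    include = ["tag:music"]
    exclude = ["tag:podcast"]

    def visible(file_tags):
        """A new player needs only this filter, not my directory layout."""
        return (all(t in file_tags for t in include)
                and not any(t in file_tags for t in exclude))

    print(visible({"tag:music", "tag:creator"}))  # True
    print(visible({"tag:music", "tag:podcast"}))  # False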
I also won't end up telling the application to reorganize things, as I did many years ago with iTunes, which promptly made it nearly impossible to wade through my music manually. I had it sort everything into directories based on the artist, with subdirectories for albums. It sounded great, until I remembered just how much music I had from OCRemix, where an album is a large collaboration between many people. All of those albums were ripped apart. Ironically, I also had some standardization issues with things like artist names, which caused more trouble. Once I stopped using iTunes I basically abandoned that collection because of the work required to fix it.
Yeah, standardization is going to be sort of a problem, but I don't think it's quite as big of a deal as you think. For one, the OS is going to ship with a bunch of standard tags just for itself to work, and there will also be a lot of really common stuff people are interested in that can ship with them. You also have file extensions, both specific extensions and, more generally, what kind of information they contain. And finally there is just good old translation. The hierarchical system basically relies on all these same methods and suffers from the same problem - namely, you can put directories wherever you want and name them whatever you want. Same problem, different manifestation.
I think the biggest benefit would come from a system that can present itself either hierarchically or tag-based. They both have merits. I've already presented some ideas on how you could store the hierarchical structure in the tags. I'm not so sure how you store the tags in a hierarchical system directly. You could probably fake it with a separate datastore easily enough though.
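For the separate-datastore route, here's a minimal sketch assuming a sidecar JSON file (the filename and layout are made up; a real system might use SQLite or filesystem xattrs instead):

    # Faking tags on a hierarchical filesystem with a sidecar datastore.
    import json, os

    TAG_DB = "tags.json"  # hypothetical sidecar file

    def _load():
        return json.load(open(TAG_DB)) if os.path.exists(TAG_DB) else {}

    def add_tag(path, tag):
        db = _load()
        db.setdefault(path, []).append(tag)
        with open(TAG_DB, "w") as f:
            json.dump(db, f)

    def find(tag):
        return [p for p, ts in _load().items() if tag in ts]

    add_tag("photos/2019/beach.jpg", "project:vacation")
    print(find("project:vacation"))  # ['photos/2019/beach.jpg']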
Finally, when did this discussion of general design goals turn into one of a real-world implementation, much less widespread adoption? I'm not sure how this is relevant.
I believe the point the parent is trying to make is that when you spend the time to learn Visual Studio, vim, etc. you can transfer that knowledge to multiple languages. With Pharo those tools are limited to, well, Pharo.
The benefits of learning a particular tool are essentially proportional to how often you use it. The greater flexibility of tools such as vim means the cost of learning them can be amortized over essentially as many projects as you wish. The cost of learning Pharo's tooling is only spread over however many Pharo projects you do. If you don't believe you'll be doing all that much in Pharo, investing that time becomes a fairly daunting proposition.
Not saying I agree, but he leaves room for other scenarios, such as hitting a motorcycle while weaving within a lane. You won't hit a car that way, but you might take out a lane-splitting bike.
I see this sentiment a lot. Intuitively it seems like it should be true, but I don't think the case is really quite so clear cut.
The costs involve way more than just the initial development. Maintenance eats up a huge portion, perhaps even a majority, of the total cost as well. And outages or other failures can be very expensive too.
It's also important to keep in mind that this isn't an all or nothing situation. We can have software that is more reliable without asking that it chug away without issue for a decade, or anywhere near as long as we expect bridges or buildings to last.
The process of developing more reliable software isn't necessarily more expensive than developing less reliable software. It can even be cheaper. I'm struggling to find the links (maybe somebody else has them handy, or I'll edit them in if I find them), but there were a few case studies done some years back by companies that moved to Ada. In addition to the benefits of more reliable software, they also found development costs were better than, or at least no worse than, with C. I know that isn't exactly the language to compare against these days, but as I said, these studies are older.
This is just my own argument, but I suspect that's because the same defects that cause problems after release also cause problems during development. With a more reliable programming system/environment, problems that might otherwise show up later in development are flagged immediately. That means the issue doesn't need to be tracked down, which can take serious time, and the developers are still fresh on the problem area.
Personally speaking, I've been totally won over by Ada. It ain't perfect, but it's a hell of a lot better than anything else I've seen - and I've looked a lot. In my own projects (mainly personal or for school, admittedly) development is much easier and ultimately quicker. I don't have to spend a day tracking down a weird bug, because the compiler lets me know about the issue as soon as I try to introduce it.
>The process of developing more reliable software isn't necessarily more expensive than developing less reliable software. It can even be cheaper. I'm struggling to find the links (maybe somebody else has them handy, or I'll edit them in if I find them), but there were a few case studies done some years back by companies that moved to Ada. In addition to the benefits of more reliable software, they also found development costs were better than, or at least no worse than, with C. I know that isn't exactly the language to compare against these days, but as I said, these studies are older.
I can believe that. Ada catches at compile time a lot of errors you would normally only notice through extensive testing. You're preaching to the strong-static-typing choir here. I believe Ada and Rust could solve a lot of problems for companies working with C/C++ and make development cheaper. You can properly model your domain and abstract without sacrificing safety.
I'm also a strong believer that TDD makes you much faster and safer in the long run.
My experience tells me that most tools, languages, or methods that catch errors earlier will save money.
Ada also has the best tested compiler I can think of.
However, my larger point was about the engineering processes, not the language itself. I think languages and tools can make it easier to build good software. The 100x time and cost is more about the process changes required when you're working on safety-critical systems: everything has to be traceable from requirement to test, there are mandatory reviews before any code change that need to be documented, there are qualification criteria for the toolchain, and so on. All these things cost a lot of time and manpower, at an arguably very poor cost-benefit ratio, which is only really worth it when human lives are at stake.
> The 100x time and cost is more about the process changes required when you're working on safety-critical systems: everything has to be traceable from requirement to test, there are mandatory reviews before any code change that need to be documented, there are qualification criteria for the toolchain, and so on. All these things cost a lot of time and manpower, at an arguably very poor cost-benefit ratio, which is only really worth it when human lives are at stake.
Absolutely. That's part of what I was getting at by mentioning all of this exists on a continuum. We don't need to, and really shouldn't, treat a SaaS startup exactly the same as a military aviation project.
We can, however, draw from the lessons learned on those safety critical projects and use parts of the process that make sense for the nature of whatever we're actually working on.
You're right; in general I suspect that comes down to strong static typing, particularly for the sorts of projects common to the HN crowd. When dealing with very large enterprise projects the balance might start to shift to more than just typing, though it would probably take a lot of real-world data that nobody is keen to supply to figure out where the tipping points are.
And I'd argue about how well Rust actually helps with these things, but that would really be going off the rails. Unfortunately.
I don't think we can confidently say that chemical and biological weapons have seen almost zero development in the last 100 years. We can say that, as far as we know, that's the case, but we also don't have a very compelling way of knowing.
The development of these types of weapons is going to be highly secretive, and not just because of the Geneva Convention. And nothing has gone on in the world in the last 100 years that might tempt the deployment of advanced chemical or biological weapons on any sort of scale, so naturally we haven't seen them used.
I also don't see how burying our head in the sand is going to help on this one. At least compared to nuclear weapons, projects on these weapons could be useful for defensive purposes. Everybody is working more or less within the same confines and rules, and I wouldn't be surprised if relatively similar developments were the result. Even if they aren't, the characterization of these weapons can be used to inform and guide the response plans to try to minimize damage in the case of an attack.
I have no doubt that biological weapons have been developed to some degree, somewhere, in a well-funded government lab, in complete secrecy. But that is not a problem. This secrecy is what saves us from a race to the bottom. Some government will develop a limited capability of biological warfare. They probably already have. With no information about what their adversaries are doing, they will simply stop. As far as they know, they have developed a state-of-the art biological weapon system, and they are probably correct.
One of the reasons the nuclear arms race happened was the "openness" of the competition. You can't hide a nuclear explosion. Your adversary knows how sophisticated you are, and they now have to push one step further.
Biological warfare doesn't have to be like this. You can be sure that some DARPA- or DoD-funded secret project is working on countermeasures, but there's another interesting point:
You can stop biological attacks by quarantine. It's very simple, very effective, and doesn't require developing Armageddon-tier weapons in the process. Another issue that's purposely not discussed in the video is how effective the delivery of these weapons is. Simply put: not very. Viruses, the main attack vector, change with every iteration. There's no guarantee you can infect that many people with an intact version of your weapon. Sooner or later, your genocidal weapon will stop being so selective, because it's evolving for its own benefit, not yours.
And finally, if you really are concerned about this issue, as you should be, the right way to fight it is to find ways to stop infection and proliferation, ways to stop these attacks without accelerating weapons development.
There are ways to fight these weapons without building them, and you can definitely do better than starting a public campaign that asks people to develop horrific bioweapons just so we can find a way to stop them later on, maybe.
I don't think it's fair to say that development programs will just stop after a reasonable advancement on the last known state of the art.
If you (a nation) are working on this, it's fair to assume your peers are as well. If you have improved on the state of the art, it's safe to assume your peers are in a similar position or will soon be.
By making advancements, all you are doing is proving that other nations with a similar level of technical sophistication can do the same. Even if you make strong assumptions that you are indeed the best, you can't assume other nations will never reach where you are now. Maybe you have 5 years on them, because you are clearly superior? Or maybe you take a more conservative stance and assume you're behind - just in case.
Furthermore, I don't think quarantine would be an effective response to an intentional biological attack. Even just quarantining, say, New York City would be a nearly impossible task. And since this is an attack, why wouldn't all major cities be targeted? There would be no way to contain it physically.
Even if you think pure quarantine is the way to go, there is a lot of useful information that can only be gained by doing the weaponization research. What sort of incubation times could show up? How virulent might it be? Knowing these sorts of things would really improve the quarantine response, and there isn't really a good way to know without doing the research. That doesn't mean it has to be packaged into a weapon, but the hard part is all done.