If you're looking for more feedback: I got a homepage that was very clear, and immediately intrigued me. I clicked "Try now." That took me to a signup form at which point I immediately closed the window.
You should try to make some kind of demo available for the user without having to do anything except click once.
Hi there, Co-Founder/Designer here. Thanks for that feedback. The iPhone right in the center of the page is an actual demo of an app using our service. You can interact with it straight away as you land on the page. In any case, I will take that feedback on board and work to make that much clearer in future designs. Much appreciated!
Hi Ronald. It's mostly for context. We enable 'try before you buy' for mobile apps, so hinting to the user that this is an iOS app increases the likelihood that they'll download the real app. (We actually have two core use cases: early beta testing and marketing apps.) Point taken though; thanks for the feedback.
Will Walmart fail, only to be replaced by manufacturers selling direct to end users and disintermediating Amazon? I have no idea, but it's no less plausible than Amazon driving absolutely everyone out of business.
Seems unlikely, because the main reason to go to Walmart or Best Buy is to get the product the same day you want it, and also to see it in person before buying it. Amazon won't drive everyone else out of business until they offer both of those.
The expansion into same-day shipping is happening. In terms of people who want to physically see the product, I don't think that is Amazon's game. They are better off giving up that market segment and the overheads associated with it. Same-day shipping and no-hassle returns are probably close enough for a lot of people.
You box up the item using the original packaging, slap on a return label printed off Amazon's website, and drop it off at UPS (a box or a store). Item arrives back at Amazon, and your account is credited.
I've only done it once. It was a defective milligram scale priced in the low teens, which arrived DOA. Taking a chance, I resubmitted my original order and dropped off the defective unit at the local UPS store. A couple of days later, I had a working scale and an account credited to reflect the purchase of one scale, net.
I don't live in the US and have never made an Amazon return. I was under the impression though that they organised for a courier to pick up the package from you, rather than you having to go and send it?
I have had UPS pick things up from my house for an Amazon return before. I'm not sure whether it happens every time or not, or how that is decided. It certainly happens at least some of the time though. Really fantastic.
I've never had a return challenged or refused, and I can immediately print out a shipping label. It's not entirely "no hassle", but it's the least I've ever had, and the only reason I've ever purchased "risky" merchandise (e.g., clothing) online to begin with.
(Forgive a 5-year-old memory of one of many cases -- I probably have the numbers wrong.) It went something like this: The director of engineering approved a log retention plan that kept access logs for 7 days or so. They wanted to reduce costs, and issues with log files were the top reason they got called in to support the service. The government needed to demonstrate that someone had accessed the service 14 days earlier, and the government could not understand why the 'minimum' of 30 days of access logs was not present. I think something else was missing, too. There was a back-and-forth, and since the company couldn't produce the logs as requested, the government got a contempt-of-court ruling with the understanding that the director would be demoted to an IC and not be anywhere near the production service. I think the company lawyers agreed to the conditions to make a worse outcome go away.
If it's not clear, there were strong personalities involved. One way to tell the story is the director went out of his way to poke a bear and got mauled. Another way to tell the story is that a bear went walking down main street looking for trouble ("How do we know you didn't change the retention policy to protect the individual?"). In both cases the guy lost his hand and the bear is still loose.
Is there a legal precedent for minimum time that logs must be kept, say for an email service or messaging service? I'm talking about US policy, if that makes it more clear.
Generally speaking, unless you are specifically required to keep records for a regulatory purpose (e.g. tax), you don't have to keep logs at all. Lavabit used to keep logs for a limited time (I think a week?).
More concerning are key disclosure laws [1] and their crazy penalties that seem to be creeping in all over the world.
You also need to take reasonable measures to preserve relevant data when you have reasonable cause to suspect that litigation or an investigation will begin.
Not having a policy can hurt you. If you have no deletion/retention policy, and happen to destroy data for some random reason when litigation begins, you or your company may be in trouble.
Note: IANAL, and different industries or data categories have specific legal requirements or best practices for retaining things.
It was likely agreed on (possibly via contract) to meet the compliance policy of the government agency, so I could see breach of contract. I don't know about legal precedent for logs per se, but there is precedent for retention of other files. For instance, HIPAA involves some well-known regulations around keeping and destroying medical data.
Don't be so bleak. If you're going to do something that will get the attention of any government, here's a simple rule to follow. Don't use 3rd parties. And if you must, do it in a way that can never be traced back to you in the "real world". It isn't hard and it isn't even illegal.
Tor is funded directly by the US govt. Something like 60-80% of their annual million-dollar budget.
Maybe being superstitious isn't helpful, but in this instance, I'm not so sure Tor is able to be relied on.
A proxy server won't work for the same reason a personal server wouldn't work. It's tied to you eventually, either through a paper trail or a packet trail.
"Maybe being superstitious isn't helpful, but in this instance, I'm not so sure Tor is able to be relied on."
Tor has been analyzed extensively by cryptographers and security researchers; there is literally a mountain of published research about it. It is operated by an independent organization. I would be more cautious about the Linux kernel, a vastly larger codebase that could and probably does have numerous back doors, than about Tor.
"A proxy server won't work for the same reason a personal server wouldn't work. It's tied to you eventually, either through a paper trail or a packet trail."
Which is why it is below anonymous remailers and Tor on my list. Proxy servers are better than nothing at all.
"Tor has been analyzed extensively by cryptographers and security researchers; there is literally a mountain of published research about it."
And that research says that if an entry node and an exit node are both under the control of an adversary, then that adversary can deanonymize the target.
I don't know enough about it, but I know that deanonymizing someone is a matter of resources, not a matter of ability. And the USG has a lot of resources.
Much in the same way that communicating insecurely with 90% of your contacts is not going to help PGP keep your emails encrypted, requesting sites from outside of the Tor network is not going to help keep your internet usage anonymous. It's a problem of behavior and adoption rate.
graycat, would you mind putting an email in your "about" section of your profile so I can contact you? I have some questions about trouble I'm running into in regards to some of the examples you presented.
You first, and I'll read your "about" and write you. For now I'm trying to remain anonymous on HN. I may not be able to help you much in academics, since I quit being a college prof long ago and am now a solo founder of an information technology startup.
One idea for jumpstarting a new HN-type site is to spider HNSearch, gathering the first 100,000 stories ever submitted to HN, along with comments. Then set up your site so that your frontpage is a doppelganger of HN's frontpage circa 2007. I.e. today your frontpage should look how the HN frontpage looked on August 7th, 2007.
That way there's (a) the appearance of activity, (b) a constant stream of interesting content on the frontpage, and (c) interesting discussion in the comments. Before long, new real users would start to participate, e.g. by replying to doppelganger comments. At that point, it's inevitable that the new site would start to get traction as long as those new users keep coming back, which they should because the frontpage is interesting.
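A minimal sketch of what such a spider could look like, assuming an item-by-id JSON endpoint in the style of the official HN Firebase API; the URL, field names, and delay here are assumptions for illustration, not a tested client:

    import json
    import time
    import urllib.request

    # Assumed endpoint: item-by-id, in the style of the official HN Firebase API.
    ITEM_URL = "https://hacker-news.firebaseio.com/v0/item/{}.json"

    def fetch_item(item_id):
        """Fetch one submission or comment as a dict, or None if it doesn't exist."""
        with urllib.request.urlopen(ITEM_URL.format(item_id)) as resp:
            return json.loads(resp.read().decode("utf-8"))

    def spider(first_n=100_000, delay=0.1):
        """Walk item ids 1..first_n in order, yielding whatever the API returns."""
        for item_id in range(1, first_n + 1):
            item = fetch_item(item_id)
            if item is not None:
                yield item
            time.sleep(delay)  # be polite; real code would batch, cache, and retry

    for item in spider(first_n=100):
        print(item.get("type"), item.get("title") or (item.get("text") or "")[:60])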
This could only work if someone had the balls to actually deploy the currently-released Arc 3.1 version of Hacker News, though, rather than rolling their own version in Rails. There's nothing inherently wrong with trying to clone HN's featureset, but it's interesting to note that not a single one of the HN knockoffs successfully cloned HN's entire featureset. Most of them were a halfway implementation.
It contains a snapshot of the first 172,575 items (submissions/comments) and a snapshot of the profiles of the first 6,519 users.
Have fun! Maybe someone can use the data to put together a cool visualization or something.
EDIT2: Just to be clear, this idea is firmly tongue-in-cheek.
EDIT3: Statistics time! According to that snapshot, when HN was 558 days old there were 38,693 submissions and 133,882 comments. The snapshot claims there were only 6,519 users. That would be an average of 20 comments per user and 5.9 submissions per user.
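For anyone checking those averages, the arithmetic is just:

    comments, submissions, users = 133_882, 38_693, 6_519
    print(comments + submissions)          # 172575 items, matching the snapshot
    print(round(comments / users, 1))      # ~20.5 comments per user
    print(round(submissions / users, 1))   # ~5.9 submissions per user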
1. None of the HN knockoffs can really clone Hacker News because, for one thing, the released Arc code is missing much of what runs the original HN. pg mentions that there's a missing "secret sauce" that ties it all together. That, and you really need to have some sort of heavy involvement in the community before you try to start it, so you'll need someone of pg-level importance to start it. Specializations could be started by very knowledgeable HN users in their field, but very general implementations would need an all-around "ideologue." EDIT: I don't know if you're already aware of this; I'm just explaining because the way you wrote it makes it appear as though you may think the Arc code is packaged with a full implementation of HN.
2. I don't think this would be allowed via the HN Search API...that's a lot of content.
3. This is a cool idea, but how would you get people to participate and add content to what is essentially a ghost, rather than just come back here, where there's news? I also don't think it solves the problem HN is currently facing. It merely puts a bandaid on it until we get back up to 2013 level volume.
I would guess the secret sauce is spam filtering code. It seems conspicuously absent from Arc releases. The Arc code does come bundled with the HN source code, though, and all of the core features seem to work fine. There's nothing missing except some way of filtering spam.
I think an HN clone might get better traction by not focusing on hackers/programmers, who are likely already perfectly happy being here. Maybe make one for all the stories and things that HN users don't want to see, or for different communities altogether.
The point for someone doing this would presumably be to hope the visitors in question would not notice that they're adding content to the ghost/doppelganger comments.
I'm not sure why you think there would be (a) the appearance of activity, (b) a constant stream of interesting content on the frontpage, and (c) interesting discussion in the comments on August 7th, 2007.
Also, HNSearch's API won't allow you to gather that many stories.
The best way to do this would be to only take stories with years in the title. These are stories that were already old when originally submitted, but evidently timeless.
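That filter is essentially a regex over the titles; a sketch, assuming each item is a dict with a 'title' field (hypothetical field name):

    import re

    # Matches a plausible publication year in a title, e.g. "Some classic essay (1998)".
    YEAR_RE = re.compile(r"\b(19\d{2}|20[01]\d)\b")

    def timeless_stories(items):
        """Keep only submissions whose titles mention a year, on the theory that
        they were already old when submitted and are therefore timeless."""
        for item in items:
            if YEAR_RE.search(item.get("title", "")):
                yield item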
Does anyone know if there has been progress in volatile memory tech? I'm hoping for 1TB RAM chips with 10x faster access times than current RAM. It will enable PC gaming to deliver unmatched immersive experiences, among other applications.
Especially if this can be accessed as VRAM, directly and just as fast, by the GPU. Infinite-detail, fractal-resolution, destructible-animatable voxel terrains come to mind (not the Minecraft kind of "voxel", mind).
The thing is, at this point a lot of latency comes from the fact that each DIMM is physically distant from the CPU.
When you are on a timescale of nanoseconds, even the speed of electricity can be slow compared to something like Intel's L4 cache, which sits right on the CPU package.
For any type of volatile memory that latency will exist until mobo designers move the RAM closer to the CPU or adopt optical interfaces between parts.
For some cool calculations that put things in perspective, take the speed of light as 299,792,458 m/s and a nanosecond as 10^-9 seconds to get ~0.3 m/ns for light. That means every third of a meter between the DIMMs and the CPU adds a constant 1 ns of latency.
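That calculation as a quick script (the 10 cm trace length is purely illustrative, and signals in copper traces actually propagate at well below c, so the real penalty is somewhat worse):

    C = 299_792_458           # speed of light, m/s
    NS = 1e-9                 # one nanosecond, in seconds

    metres_per_ns = C * NS    # ~0.30 m of light travel per nanosecond
    print(f"light travels ~{metres_per_ns:.2f} m per ns")

    # One-way propagation delay if the DIMM sits ~10 cm of trace away from the CPU
    # (illustrative distance; copper traces are slower than light in a vacuum).
    distance_m = 0.10
    print(f"{distance_m * 100:.0f} cm of trace ~= {distance_m / metres_per_ns:.2f} ns one-way")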
Not strictly true. Most algorithm choices in gaming can make trade-offs between CPU and RAM. If you increase available RAM, you can usually use algorithms that are more RAM-hungry and less CPU-hungry, for large speed benefits.
In 1 TB of RAM you can keep, without any compression, a 3D voxel array covering 1000 m * 1000 m * 64 m, with each voxel a 2 cm * 2 cm * 2 cm cube (the numbers fit at one bit per voxel). And you can look it up randomly with negligible latency and do real-time raytracing on it.
If that won't change games I don't know what can.
Besides, GPUs will obviously also use this technology if it really works.
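A quick sanity check of those numbers; note they only fit in 1 TB if each voxel is a single bit (e.g. a pure occupancy grid), which is made explicit below:

    voxel_edge_m = 0.02                    # 2 cm voxels
    dims_m = (1000, 1000, 64)              # terrain extent in metres

    counts = [int(d / voxel_edge_m) for d in dims_m]   # 50000 x 50000 x 3200
    total_voxels = counts[0] * counts[1] * counts[2]   # 8e12 voxels

    bits_per_voxel = 1                     # occupancy only; 1 byte/voxel would need 8 TB
    total_tb = total_voxels * bits_per_voxel / 8 / 1e12
    print(f"{total_voxels:.2e} voxels -> {total_tb:.1f} TB at {bits_per_voxel} bit/voxel")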
I don't know what is special about emulating a PS3 on a PC; most mainstream PCs and GPUs are faster than the PowerPC-based Cell processor in the PS3 (an 8-year-old console). Even the PS4 does not contain any better graphics processing capabilities than a recent, relatively high-end PC.
The only reason to prefer the GPU is that it confers an advantage over the traditional CPU+RAM combination. There's nothing inherently special about a modern GPU. The GPU is a set of operations and abilities encoded into hardware, e.g. the ability to automatically perform various kinds of texture filtering transparently to the game developer.
Since the GPU is hardware, and since hardware is less flexible than software, a graphics programmer would always prefer a software-based pipeline to a hardware-based one. The reason hardware pipelines are preferred is strictly because their advantages outweigh their disadvantages. Typically, using a GPU enables graphics programmers to create renderers which are 10-100x more efficient than software-based renderers, so the added flexibility of a software rasterizer tends to be forgotten in the face of massive efficiency enabled by the GPU.
The GPU primarily became popular because (a) it offloaded part of the computation from the CPU to dedicated hardware, freeing up the CPU for other tasks like game logic, AI, and more recently physics computations (though nVidia is trying hard to convince developers that hardware-accelerated physics is a viable concept), (b) GPUs increased the amount of available memory, and (c) GPUs dramatically increased the throughput (memory operations per second) of graphics memory.
Memory latency plays a key role in many modern graphics algorithms, such as voxel-based renderers. It's often the case that an algorithm needs to repeatedly cast rays against a voxel structure until hitting some kind of geometry. Therefore, within an individual pixel of the screen to be rendered, this type of algorithm can be hard to parallelize because typically the raycasting can't be broken up into parallelizable steps. It typically looks like, "While not hit: traceAlongRay();" for each pixel, each frame. I.e. this algorithm can only trace one section of the ray at a time before tracing the next.
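A bare-bones sketch of that loop in Python, marching through a dense boolean occupancy grid with a fixed step (rather than a proper DDA) just to show why each step depends on the previous memory lookup; the function name and arguments are illustrative:

    import numpy as np

    def trace_ray(grid, origin, direction, step=0.5, max_steps=4096):
        """March along a ray until a filled voxel is hit.

        Each iteration performs one memory lookup whose result decides whether
        another lookup is needed, so a single ray is bound by memory latency:
        the steps cannot be issued in parallel.
        """
        pos = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)

        for _ in range(max_steps):
            ix, iy, iz = np.floor(pos).astype(int)
            if not (0 <= ix < grid.shape[0] and
                    0 <= iy < grid.shape[1] and
                    0 <= iz < grid.shape[2]):
                return None                  # ray left the volume without hitting anything
            if grid[ix, iy, iz]:             # "while not hit"
                return (ix, iy, iz)          # hit: return the voxel coordinates
            pos += d * step                  # traceAlongRay(): take one more step
        return None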
That raycasting algorithm is memory-latency-bound because it completes only after it has looked up enough memory locations to detect that the ray has intersected some 3D geometry. In other words, if you reduce memory latency by 2x, and assuming memory bandwidth is sufficient, this algorithm will complete twice as fast. That means instead of 24 frames per second, you might get 48 frames per second.
So, all that said, if it becomes common to have 1 TB of regular RAM with the latency and bandwidth traditionally offered by GPUs, along with a surplus of available CPU cores to offload computations to, then software renderers will once again become preferable to GPU renderers. A software pipeline will always be more flexible, easier to debug, and easier to maintain than a hardware pipeline, simply because its featureset isn't restricted to the capabilities of the video card it's executing on.
All of that means that it'll be easier for art pipelines to produce more complex, more immersive visual experiences than at present. But replacing the traditional GPU-based renderer with a CPU-based software renderer will only be practical if there's a major advance of RAM technology in the future, because current RAM tech can't match the memory bandwidth / latency of a modern GPU. Hence, any major developments in the area of volatile memory tech will be extremely interesting to graphics programmers.
And a large part of what makes a better GPU is RAM bandwidth. If you're using integrated graphics, there actually tends to be a quite large difference between system RAM clocked at 1066 MT/s and RAM clocked at 1866 MT/s. If you're using a discrete GPU card, the RAM that makes a difference to your gaming performance is already soldered onto the card, but that might still see an improvement from faster memory technologies, since the people who make those cards could use them.
If you're using integrated graphics there isn't any PCI bus involved. Even when graphics was off-die it was in the Northbridge.
And the bandwidth between the GPU and the GDDR on the graphics card doesn't have anything to do with the PCI bus either, except to a small extent when synchronizing with the CPU or initially loading textures or whatever.
Apart from the cutting-edge APUs (like AMD's Kaveri implementing hUMA), the memory of integrated GPUs and previous-generation APUs was kept separate, and copying chunks of data between the two happened over the bus.
Plus since we're talking about cutting-edge gaming, integrated graphics is irrelevant (and so are APUs).
You should restate the main point of the article and then explain why the top comment is a quibble about an unrelated detail. If the readers aren't as knowledgeable as yourself, then you have an opportunity to change that rather than complain about it.
I'm not qualified to discuss the main point, but I am qualified enough to separate the wheat from the chaff in this case.
The article is about quantum cryptography. The author gives credit card processing as a layman example of encrypted communication, and states, erroneously, that the connection process is slow because of the SSL handshake.
It is true that SSL increases latency, but it is not the main factor, as harshreality said.
However, the example is an aside, and the error is a small aside within that aside. It's not really worth discussing, and it definitely doesn't belong at the top of the thread, where it was when I first posted. It is still currently the second comment.
Note that harshreality improved his post after the fact... or did I miss the good bits when I first read it? Hoist with my own petard? If so, sincere apologies.
Nothing else in the article was particularly worth discussing either; other comments are detailing how this is not a breakthrough, or making minor clarifications, with beloch's in particular highlighting an alternative that might actually have interesting applications.
I think this type of thread is what tptacek meant by "these threads [that question whether CP is a big deal] are always repellant." I must say, this one certainly is.
I encourage everyone to chill out, leave your emotions at the door, and give the topic a thorough and dispassionate treatment.
"I'm fucking tired of X" is an unreasonable way to conduct ourselves. It's a sure way not to change anyone's opinion.