
> Most of the lights are going toward distributions like Ubuntu, Mint, Manjaro.. And those shiny new distributions.

I don't think openSUSE really compares against any of those distros; SUSE Linux and openSUSE compare better against RHEL/CentOS. I think the biggest reason that SUSE lags behind most other distros has less to do with other distros being "shiny" and more to do with strong network effects. I tend to use CentOS or Ubuntu LTS simply because I know that googling "centos 7 <problem I'm currently having>" tends to yield far more high-quality resources than it does for most other distros.


> "centos 7 <problem I'm currently having>" tends to yield a lot more high quality resources than most other distros.

And yet, most of the time the best resource I find is Arch's wiki ;)


The Arch wiki is fantastic. I've not heard a good explanation of why it is so much better than Ubuntu's/Red Hat's. A community of tinkerers? Lack of hegemonic domination by Red Hat/Canonical?


Probably because it's generic enough to be applicable to the broader Unix ecosystem. I refer to the Arch Wiki all the time even though my Linuxen are almost exclusively either Slackware or openSUSE.

The downside, though, is that some distros have specific tooling for solving a problem, and that tooling is often more appropriate than the information in the Arch Wiki (though the AW still helps if you want to understand what those tools are doing and why).


Though to be fair, the Arch wiki (as with any OS wiki, really) is so extensive because 'they' had a lot of (user) issues.

I used to use Arch quite a bit, and even had my own spinoff, but some fundamental package changes made by the core devs to critical services (networking, and the switch to systemd) got very annoying, and update rounds became simply too unreliable to run any production stuff on it.


Maybe SUSE users don't have as many problems to search for?

Those users might also be utilizing the OS differently than you.


I'm not sure how this will work out as a long-term strategy, but in the near term I am very annoyed. It seems like they can't decide what to do with Hangouts. Vacillating on core functionality like this every other year only serves to frustrate end users.


You should probably qualify that with an Andy-Weir-is-not-a-lawyer disclaimer. There are a few space treaties that probably supersede some elements of "maritime" law.


Using this[1] humorous but still mostly effective methodology, that model seems to be okay[2], at least for now.

[1] https://twitter.com/todb/status/648956328292057088

[2] http://www.rapid7.com/db/search?utf8=%E2%9C%93&q=SB6141&t=a


I like the idea of searching Metasploit and I'll be making use of it, but the models mentioned in the article aren't in Metasploit's database yet either. So while it is a good step, it is not very conclusive.


The biggest shortcoming I see compared to the other big players (AWS, Azure, Google), and it is something they don't mention, is that they only have one datacenter, compared to the several each of those providers operates. The pricing is quite incredible though. I suspect that if enough people hop on board with this, they will probably look into setting up another datacenter.


Disclaimer: I work at Backblaze. We do mention that we only have one datacenter! We're very transparent; we also tell you that it is 17+3 Reed-Solomon error correction across 20 separate machines in 20 separate locations inside that one datacenter.
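
For intuition, here's a rough back-of-the-envelope on why 17+3 is durable (my own sketch, not official numbers; it assumes independent shard failures and a made-up 0.1% chance that any given shard is lost within one repair window):

    # Sketch: probability of losing a 17+3 Reed-Solomon stripe.
    # Assumes independent failures and a hypothetical per-shard loss
    # probability within one repair window -- illustrative only.
    from math import comb

    n, parity = 20, 3   # 17 data + 3 parity shards
    p = 0.001           # assumed per-shard loss probability (made up)

    # Data is lost only if MORE than `parity` shards fail at once.
    p_loss = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                 for k in range(parity + 1, n + 1))
    print(f"P(stripe loss) ~ {p_loss:.3e}")   # ~4.8e-09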

We are already looking for another datacenter, but mostly because we're running out of space in the current one due to our traditional business (online backup) doing so well.


Something to note: unless you're storing data in us-east-1, every other AWS region is "one datacenter". Yes, they have AZs, but those aren't separate datacenters; they're just compartmentalized segments of the same datacenter.

So! If you can tolerate the loss of a datacenter, store in Backblaze. If you need geo-redundancy until Backblaze can offer it? Store in us-east-1 (which is geo-redundant between Virginia and Oregon).


This is incorrect on almost all points.

All AWS AZs are physically separated facilities with redundancy on all their infrastructure, although they're obviously in the same general area.

us-east-1 is not geo-redundant. It is entirely on the east coast, as the name suggests. Although S3 does have geo-redundancy in all regions.

You may have been thinking of "US Standard", but it is the same as "us-east-1".

http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_r...


> This is a feature for EU and US-West. US Standard is bi-coastal and doesn’t have read-after-write consistency.

Quote from Jeff Barr @ AWS: http://shlomoswidler.com/2009/12/read-after-write-consistenc....


Their documentation is really inconsistent on this...


I've emailed Jeff to get clarification on it.


You did? I didn't see it...


From my gmail account :/ I'll resend!


Do you have a source that all other regions are "one datacenter"?

All I could find in the S3 FAQ is "your objects are redundantly stored on multiple devices across multiple facilities," which seems to contradict the "one datacenter" claim.

Also, do you have a source that us-east-1 is geo-redundant between Virginia and Oregon? That was not my understanding of how it worked.


AWS considers multiple facilities to be separate AZs in the same region. If you want multi-region durability (outside us-east-1), you need cross-region replication enabled (from the same FAQ you read).

"You specify a region when you create your Amazon S3 bucket. Within that region, your objects are redundantly stored on multiple devices across multiple facilities. Please refer to Regional Products and Services for details of Amazon S3 service availability by region"

Note, "within that region". Separate AZs, same geographic location.

"CRR is an Amazon S3 feature that automatically replicates data across AWS regions. With CRR, every object uploaded to an S3 bucket is automatically replicated to a destination bucket in a different AWS region that you choose. You can use CRR to provide lower-latency data access in different geographic regions. CRR can also help if you have a compliance requirement to store copies of data hundreds of miles apart."

This post http://shlomoswidler.com/2009/12/read-after-write-consistenc... has a quote from Jeff Barr at AWS indicating that us-east-1 is bicoastal, which is also why it's eventually consistent rather than consistent immediately after a write (EDIT: it appears this constraint no longer applies to the US Standard region).
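
For reference, enabling CRR looks roughly like this with boto3 (a sketch only; the bucket names and IAM role ARN are hypothetical, and versioning must already be enabled on both the source and destination buckets):

    # Sketch: enabling S3 cross-region replication with boto3.
    # Bucket names and role ARN are hypothetical placeholders.
    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_replication(
        Bucket="my-source-bucket",                 # e.g. in us-east-1
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
            "Rules": [{
                "ID": "replicate-everything",
                "Prefix": "",                      # replicate all objects
                "Status": "Enabled",
                "Destination": {
                    # destination bucket lives in another region
                    "Bucket": "arn:aws:s3:::my-dest-bucket",
                },
            }],
        },
    )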


I'm familiar with CRR.

I asked for sources about your "one datacenter" claim. Just because several facilities are in the same geographic region does not mean they are the same datacenter.

Just because something is bicoastal does not mean your data is replicated on both coasts. It could also mean that your data is stored on either the west or the east coast.

I would have trouble believing they store twice the data as their other regions but charge the same (actually a bit less!).


Multiple facilities != "one datacenter"


Like I asked below, do you have a citation?


"No one data center serves two availability zones" :

http://www.theregister.co.uk/2015/04/16/aws_data_centre_arch...


From your link:

"To solve latency, Amazon built Availability Zones on groups of tightly coupled data centres. Each data centre in a Zone is less than 25 microseconds away from its sibling and packs 102Tbps of networking."

25 microseconds at the speed of light (best case, through a vacuum; through fiber is significantly slower) is ~4.7 miles, and based on the quote, that is the furthest they are apart. If your buildings are within 1-2 miles of each other, they're essentially the same facility.
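
Quick sanity check on that figure (assuming light in vacuum, and roughly 2/3 c through fiber):

    # One-way distance light travels in 25 microseconds.
    c = 299_792_458            # m/s, speed of light in vacuum
    t = 25e-6                  # seconds

    def miles(m):
        return m / 1609.344

    print(f"vacuum:        {miles(c * t):.2f} mi")      # ~4.66 mi
    print(f"fiber (~2/3c): {miles(c * t * 2/3):.2f} mi")  # ~3.10 mi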

That is not geographically redundant.


Sure, it's not geographically redundant, but nobody in this thread claimed it was. DinkyG disputed your "one datacenter" claim, which does appear to be false.


[flagged]


Given that they created the account to comment in the DynamoDB thread, I'd guess they're a DynamoDB developer, but that doesn't invalidate anything they've said in this thread -- they even provided a 3rd party source.


> If you need geo-redundancy until Backblaze can offer it? Store in us-east-1

Or store it in Amazon AND store another copy in Backblaze. This isn't necessarily an "either/or" question. Having two copies with two different vendors in two separate regions is probably more reliable than having two copies inside the same vendor. For example, if Amazon has a large outage that affects both your regions, you can still access the copy in Backblaze.


If you're going to pick two providers, use Backblaze and Google. Google's Nearline Storage is still more reliable (AWS only offers a 98% SLA on a monthly basis for the S3 IA storage class) and cheaper (if I recall properly) than AWS' Infrequent Access offering.


This is incorrect. If you look for news articles about Amazon constructing data centers or buying facilities you'll notice that they have multiple data center facilities in each region.


Have a citation? My data is directly from AWS docs and speaking with AWS staff.


The docs you quoted specifically call out "multiple facilities".


What about Ireland? Amazon has 7 active datacenters there, 1 being built and 1 in planning...


I'm not sure about CA specifically, but some states only do full inspections on newer vehicles every other year, e.g. in a particular year the odd-year models get the full inspection and the even-year models just get a quick inspection, and then vice versa the next year.


CA doesn't even do smog checks on newer vehicles (I believe for the first 5 years, though I'm not sure).

I haven't done one in over a decade because I lease my cars as a business expense.

Also, CA has 'test-only' smog centers which do not perform repairs - if you fail there, you have to get your car tested and repaired somewhere else.

Needless to say, CA highly incentivizes you to buy new, clean cars. It quickly becomes very expensive to fix an old, polluting car under this regime. Imagine how fast the bills can pile up if the first or second fix doesn't work. With new econo-class cars available for ~$100/month on finance or lease, it doesn't really make sense to keep an old clunker around unless it's a collectible.


The 6-year exemption for new vehicles does not apply to diesels. Diesel cars are required to get smogged every two years.


incentivizes you to buy an old clunker (pre-emissions) and keep it running forever... or a truck...


Interesting to note: diesel trucks/cars from 1997 and older did not and still do not require any smog testing at all.

"Currently, smog inspections are required for all vehicles except diesel powered vehicles 1997 year model and older or with a Gross Vehicle Weight (GVWR) of more than 14,000 lbs, electric, natural gas powered vehicles over 14,000 lbs, motorcycles, trailers, or gasoline powered vehicles 1975 and older."[0]

[0] https://www.dmv.ca.gov/portal/dmv/detail/vr/smogfaq


Georgia doesn't test diesel cars at all, and tests other cars only if they're over three or four years old.


And only in the metro ATL area. The rest of the state doesn't do emissions testing.


Wisconsin doesn't test any cars older than 1996, because they would need dynos for that. Much cheaper for them to just test OBD-II cars.


This is not to mention that even if it were open source, it is unlikely that the toolchain used to create it is even available, let alone open source. Firmware blobs are just a fact of life, for now at least.


Intel has licensed GPU designs from third-parties before, but typically only for their low-power (Atom) chips where they just didn't have the technology in-house.

Here, we're talking about a new micro-architecture for Intel's premium product line; I would be very surprised to hear Intel licensed anything in the design from third-parties. If Intel wanted their tool-chain to be available, they could make it so.


And I don't think it will become less common.

You can get a product out the door much faster if it uses an off-the-shelf microcontroller and an EEPROM rather than a custom IC.


From somewhat old first-hand knowledge: the GuC toolchain isn't that complicated, mostly Linux-based.

The GuC itself is basically a Pentium processor running a very simple OS.


I think the biggest distinguishing feature of this is being able to have it encrypt emails with customer-provided keys stored in their Key Management Service. This hypothetically should prevent three-letter agencies from accessing emails, but I'm not sure that is the top feature on everyone's mind when they are looking to set up email for their company. It definitely piques my interest though.
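
For the curious, KMS-backed envelope encryption usually looks something like this (a minimal sketch assuming boto3 and the `cryptography` package; the key ARN is hypothetical, and a real mail service would add per-message metadata, key rotation, etc.):

    # Sketch: envelope encryption with KMS. KMS wraps a per-message
    # data key; only the wrapped key + ciphertext are stored.
    import os
    import boto3
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    kms = boto3.client("kms")
    KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/example"  # hypothetical

    def encrypt_email(plaintext: bytes):
        dk = kms.generate_data_key(KeyId=KEY_ARN, KeySpec="AES_256")
        nonce = os.urandom(12)
        ct = AESGCM(dk["Plaintext"]).encrypt(nonce, plaintext, None)
        # Store these three; the plaintext data key is never persisted.
        return dk["CiphertextBlob"], nonce, ct

    def decrypt_email(wrapped_key, nonce, ct):
        key = kms.decrypt(CiphertextBlob=wrapped_key)["Plaintext"]
        return AESGCM(key).decrypt(nonce, ct, None)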


If your mail is encrypted, how do you search it?

EDIT:

That is, assuming the mail is stored on the server and it's encrypted, how do you search it efficiently?

It does not seem efficient to download every byte of mail, decrypt it, and search it on your local machine (especially a phone). Perhaps you could build an index locally, but could you keep it updated? And even that requires downloading and reading every byte at least once.

This is something I've always wondered about encrypting hosted email.


The actual content is encrypted, but one can still build an index that points to individual email IDs and scores the search results properly. Only when returning the top N results does one need to decrypt those N emails with the right keys. The index would be kept on the server. Of course, the devil is in the details, and things like email threading, ordering by date, or grouping by sender will make or break the user experience.


A full text index that's actually useful will allow you to largely piece back together the original content, modulo stemming and stopwords.

I guess it would be something like encrypting the index, then decrypt it on demand, just like you would decrypt individual messages on demand.


Not if the index values are encrypted (public-key) too.

hashed-word => encrypted-list-of-msg-indices

something like that.
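
A toy version of that idea (my own illustration; it uses symmetric keys for simplicity where the parent suggests public-key, and real encrypted search also has to worry about frequency leakage, index updates, and ranking):

    # Toy "blind index": HMAC the search term, encrypt the posting list.
    # The server sees neither readable words nor readable message IDs.
    import os, hmac, hashlib, json
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    index_key = os.urandom(32)   # secret HMAC key, held by the client
    list_key = os.urandom(32)    # secret AES key for posting lists

    def blind(word):
        return hmac.new(index_key, word.encode(), hashlib.sha256).hexdigest()

    def seal(msg_ids):
        nonce = os.urandom(12)
        return nonce + AESGCM(list_key).encrypt(
            nonce, json.dumps(msg_ids).encode(), None)

    def unseal(blob):
        return json.loads(AESGCM(list_key).decrypt(blob[:12], blob[12:], None))

    # Server-side index: hashed-word -> encrypted list of message IDs
    index = {blind("invoice"): seal([17, 42, 103])}
    print(unseal(index[blind("invoice")]))   # [17, 42, 103]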


As other commenters elsewhere in the thread have pointed out, we don't know much about the implementation at this point. How it is implemented I think will make or break this product.


Amazon's size basically guarantees that if they offer such a service, there will ALSO be a backup copy encrypted with a Three-Letter-Agency key.

You can fly under the radar when you are LavaBit-small (and even then, only until you have a single high-profile user). But not when you are Amazon.


In the beginning, I don't think so. We've seen recently with Apple and Google that there is such a thing as a non-backdoored encryption product that agencies (both US and abroad) get upset about. That doesn't preclude there being a backdoor requirement in the future, similar to a wiretap law.


> We've seen recently with Apple and Google that there is such a thing as a non-backdoored encryption product that agencies (both US and abroad) get upset about.

Who says they're not pretending to get upset about them?


Amazon is the company that cut off WikiLeaks when a senator made a hostile speech. Google and Microsoft have both gone to court to resist government actions.


We're talking about a company that has the CIA as a customer and censored WikiLeaks after a single phone call from a senator.


I think that since the Sony leak, a lot of people have probably thought about the security of their corporate mail - I have.


That's a good thought.

The more I have sat and thought about it, the more use cases I can come up with where there is a business case for it. One big one that comes to mind is foreign companies that don't trust the US.


In the US that is called "discovery".


I'm kind of disappointed they are trying to sell this before they have actually gotten a free BIOS and firmware for those locked-down pieces. Without those, this laptop is no different from any other laptop that I can buy and load with free open-source software.

Probably the closest I've seen to a truly open source and free laptop is Bunnie's Novena.

