throwasehasdwi's comments | Hacker News

This does NOT bode well for the future of humanity. It seems that war is only war when people on your side are dying. Once everything is sufficiently automated it will be possible to wage war without risking any of your humans. I don't want to know where that leads


> Once everything is sufficiently automated it will be possible to wage war without risking any of your humans.

I don't see how that could possibly work. Just because people won't die directly from a weapon anymore doesn't mean they aren't negatively impacted in this hypothetical scenario. I'm thinking of attacks crippling key infrastructure, which could lead to large-scale supply shortages.


Are you implying that Rust should be the first programming language of these minority groups? Given the complexity of low-level programming, that doesn't sound like a good idea. If that is not your plan, then this is a pointless exercise. The cohort of existing programmers is highly biased in these same ways. The bigger Rust gets, the more difficult it will be to maintain anything far from the average. If Rust becomes as popular as, say, Java or C++, it will be statistically impossible to maintain anything but the average within the community.


> Are you implying that Rust should be the first programming language of these minority groups?

No.


Did you read the rest? I appreciate a reply from the core team, but you must understand that your potential audience of Rust users is a cohort that is already highly biased in gender and race. If Rust becomes popular, it will become nearly impossible to deviate from the mean unless you engage in affirmative action by selectively removing community members in the majority.

Edit: I'm not against these kinds of initiatives at all, I just don't think it's realistic to apply such goals to a free-form group of programming language users.


> Did you read the rest?

I did, but I disagree. You're making a large assumption, which is that the group will follow the demographic average. This is not true; individual programming languages' demographics differ from the overall population.

And there's certainly no desire to "remove" anyone, selectively or not. It's about growing the pie, not placing artificial limits.

Also, this initiative is not about people new to programming overall, it's about experienced people who may not know Rust or haven't found a way to get involved with Rust as a project.


The beginning of the end for Intel. Microsoft is clearly going to pivot win10 into the mobile market and leave Intel in the dust.


Yes... or at least it could be.

Intel must have been seeing this coming for quite a while. Even to a casual viewer from the outside it's been pretty clear for years.

I wonder what their response has been and if it has a chance of succeeding.

I am assuming we haven't seen their response yet. If we have, then they are in a bad place.

If you want to be an optimist in Intel's favor, perhaps the recent chip delays and stagnation are due to them pulling resources away from incremental improvements to dead-end architectures and putting them onto The Next Big Thing in CPUs, which we'll all find out about soon... ha, ha.


Intel has responded in a way: by threatening litigation [1]. How this plays out will be known soon.

[1] "Intel fires warning shots at Microsoft, claims x86 emulation is a patent minefield" [ https://arstechnica.com/information-technology/2017/06/intel... ]


This is nowhere near the end of Intel; performance desktops and servers will still be running Intel processors for many years to come. ARM simply cannot match the performance, despite how impressive that demo video might look.

Intel was already dead in the water for mobile so I doubt they're too bothered.


No doubt we're talking about years.

But, thinking about servers, for years now the default has been to build software systems that scale horizontally (by adding more computing cores) rather than vertically (by using faster cores).

In such systems, you may find, e.g., that 20 slower cores perform about as well as 10 faster ones.

In that case, you start looking at other factors. A big one is power efficiency. You have to pay to pull electricity into your servers and pay to get the heat away from them. This might be ARM's big advantage.

Now, in general systems, a single core has to achieve a certain level of performance to be considered at all. It has to have enough power that people are confident it will support a wide range of potential applications before servers using it will be widely deployed. But it's arguable that we're already there now, or almost there.

I think the way this is going to go down is AWS (and/or their competitors) will add an ARM option for EC2 and other compute services. It will be cheaper than the Intel option. People will try it and find it works fine for a wide range of uses. Then the dam will break and things will transition quickly.

Now, I'm kind of assuming here that ARM will continue to improve but will hit limits and settle into a place where it offers performance in the ballpark of Intel while maintaining significantly better power efficiency. If ARM somehow finds a way to surpass Intel, then things will move more quickly; if it stagnates, only a partial transition will occur.


So far ARM in servers has proven to be a disaster in performance per watt. Intel's Xeon D CPUs are not even the latest microarchitecture, and yet they are wiping the floor with any other chip trying to take the performance-per-watt crown. Look up some Xeon D-1587 benchmarks (16 Broadwell cores at 1.7 GHz in a 65 W envelope).

And since then Intel has picked up at least another 30% in performance per watt by going from Broadwell to Kaby Lake, and soon Coffee Lake, so they can answer any challenger if they want to.

The reason there's no Skylake Xeon D is that Intel doesn't need it yet.


Microsoft plans to move some of its Azure services to run on ARM, among other things via Windows Server on ARM.[0]

> "We feel ARM servers represent a real opportunity and some Microsoft cloud services already have future deployment plans on ARM servers," he wrote ahead of the conference.

> "We have been running evaluations side by side with our production workloads and what we see is quite compelling.

> "The high Instruction Per Cycle (IPC) counts, high core and thread counts, the connectivity options and the integration that we see across the ARM ecosystem are very exciting and continues to improve."

[0] http://www.techrepublic.com/article/windows-server-on-arm-mi...


Again, the recent ARM benchmarks/demos are very impressive, but there is no way they'll be able to match the higher-end Xeons, if for no other reason than you have to do x86 emulation.

I think they'll definitely be able to take some market share from Intel at the lower-end, but they won't be going anywhere any time soon.


AMD K12 is still slated for release in 2017 somehow; they didn't discontinue it. It'll have Zen levels of performance if it's released. For the time being, only Apple cores are even remotely competitive in single-core performance.


Well, there hasn't been any mention of K12 since 2015/2016, and the last Financial Analyst Day didn't have K12 on it at all.

Again, it doesn't make any sense to put ARM in servers. There just isn't any advantage yet, or in the foreseeable future.


Yeah, it's interesting that it wasn't officially dropped from the plans, even if no-one hears anything about it.



The iPad Pro is already neck and neck with the MacBook Pro: https://pbs.twimg.com/media/DCV4rQtW0AAe3R2.jpg


>> The iPad Pro is already neck and neck with the MacBook Pro

A nitpick with the use of "the" with MacBook Pro.

I think you need to qualify that as the 13" MacBook Pro. The 13" models use ultrabook-class processors, while the 15" models use much faster quad-core i7 processors. IMO, the 13" and 15" are very different beasts because of that.


You must be confusing the 13" MacBook Pro (i5-7267U, i5-7287U, i7-7567U) with the 13" MacBook Air (i5-5350U) or 12" MacBook. Only the latter has a true "ultrabook class" processor.


Call me a snob, but I consider anything suffixed with a U to be an ultrabook class processor, and anything with an M to be something worse. On the Windows side, those high end U CPUs are typically put in laptops that the manufacturers themselves call "ultrabooks".

Either way, a U processor doesn't really compare to the MQ/HQ-suffixed CPUs in terms of performance.


Yup, I'm going to call you a snob. ;) Purely because you're distinguishing based on a naming convention, rather than actual performance. But of course you're entitled to your opinion!

Checking the Geekbench comparisons, performance of the 13" and 15" MacBook Pros are mixed in together.

MacBook Pro (13-inch Mid 2017), Intel Core i5-7360U @ 2.3 GHz (2 cores): 4330
MacBook Pro (15-inch Mid 2017), Intel Core i7-7700HQ @ 2.8 GHz (4 cores): 4339

That's a negligible difference.

Of course multi-core changes the picture dramatically due to 2 vs 4 cores and the different TDPs, etc...

[1]: https://browser.primatelabs.com/mac-benchmarks


The multi-core performance matters more if you're going to call a laptop "pro", IMO.


> Microsoft is clearly going to pivot win10 into the mobile market

They've just pivoted out of the mobile market by giving up on Windows phone.


And switched into the netbook and hybrid laptop market, where Android has been a disaster, especially since developers only target phone screen sizes.

At most retail stores in my German city, the Windows tablets/netbooks clearly outnumber the few Android ones on display.

Even the Samsung models on display are all W10 ones.


They killed off their Raspberry Pi competitors as well.


In this case I would say the lack of details is a good thing. It must work so well that they can afford to keep it a secret from major vendors. If it weren't near native speed, or had poor x86/x64 ISA support, they would be giving a heads-up to the bigger software shops.


From the 68K to PPC migration at Apple, I remember software spent much more time running system code than application code (listening for events, redrawing stuff, and so on). It's true PPC was much faster than 68K and that made the move easier, but, still, just not having to emulate anything beyond the system library calls is great.


It has no x64 support...


It doesn't, because it's based on a Microsoft product which already existed: Virtual PC for PowerPC Macs.


Yeah, homeless people are mostly not the same people who could afford any type of home in the first place. They're overwhelmingly people with mental and/or drug issues that prevent them from holding a job. Much more likely is that SF has many factors that make it a nice place to be homeless, so these people migrate there.


I disagree. I think there are certainly people who need mental health/drug counseling, but even for those who seek it, it has to be nearly impossible to get back on your feet. I can see a spiral where you seek help, get better, but then you can't afford a place to live. You can't find a job since you don't have an address, or you do find a job but can't save fast enough to afford an apartment. So you go back to living on the streets and slide back down.


Do you actually see this spiral personally or do you envision it?


> They're overwhelmingly people with mental and/or drug issues that prevent them from holding a job

Citation needed.

Homeless people with drug problems or mental illness are the most likely to be visibly homeless, acting out, and/or harassing people, so perhaps confirmation bias makes you believe they represent the majority? In reality a minority of homeless people (~35%) have drug, alcohol, or mental problems.

The majority of homeless people are depressed and ashamed of being homeless and trying desperately to get off the streets. They lose their job or get evicted and don't have any family they can turn to, causing a downward spiral. Even if you have the money to rent an apartment you're fighting people who can pre-pay a year's worth of rent (thanks to large signing bonuses).

I don't know the breakdown but it is undeniable that some portion of the homeless in the Bay Area are homeless as a direct result of housing being so unaffordable.

>Much more likely is that SF has many factors that make it a nice place to be homeless so these people migrate to it.

That's wrong; 70%+ were residents of SF when they became homeless. Only 10% are from outside California and most of them said they came here on promise of a job, to find family, etc.


I've heard a statistic that the non-mentally-ill homeless in SF tend to get back on their feet in about a year on average. The chronically homeless, mentally ill people that everyone can see need mental health services.


> In reality a minority of homeless people (~35%) have drug, alcohol, or metal problems.

Citation needed.

In reality, when people are complaining about homelessness, it's about the chronically homeless, not the temporarily homeless. 1/3 have drug problems and 2/3 have mental or physical disabilities. Those 1/3 and 2/3 pools don't perfectly overlap, so the number of people with neither drug problems nor disabilities is very small.

https://www.samhsa.gov/homelessness-housing


Inexpensive housing opens up options, for both the public and private sector, that don't exist where all the housing is eye wateringly expensive.


It's not unreasonable to say that people get labeled as having mental issues because of not having a job, which causes a feedback loop, etc.


Separating CSS always seemed stupid to me; I was so happy when it finally became "best practice" to have inline styles within isolated components. HTML is the markup, and as far as I'm concerned the styling is part of that markup.

I don't understand how anyone ever thought CSS's style inheritance was a good idea. I don't mind HTML and even JS these days, but CSS really needs to be sent straight back to hell where it came from.
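
To make that concrete, here's a minimal sketch of the styles-in-component pattern, assuming React; the component name and style values are just illustrative:

    import React from "react";

    // Styling lives next to the markup it describes, scoped to this one component.
    const cardStyle: React.CSSProperties = {
      padding: "1rem",
      border: "1px solid #ddd",
      borderRadius: "4px",
    };

    // Reusing the component reuses its styles: no global stylesheet to coordinate
    // with, and no selector that can leak into unrelated markup.
    export function Card(props: { title: string; children?: React.ReactNode }) {
      return (
        <div style={cardStyle}>
          <h2 style={{ fontSize: "1.25rem", margin: 0 }}>{props.title}</h2>
          {props.children}
        </div>
      );
    }

Whether you use a plain style object like this or a CSS-in-JS library, the point is the same: the styling travels with the component.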


One word: re-usability.


The components should be reusable, not the styles on them.

Besides global font settings, CSS nowadays is mostly used for positioning and isn't reusable.


For components, bundling the CSS makes sense. For other use cases, external CSS resources with inheritance are very beneficial. See, for example, Scalable and Modular Architecture for CSS or any modern CSS framework for lots of examples.


Reusing CSS is a great idea for websites; React and Vue are most commonly used for web apps, where the atomic unit of reusability is the component.


What a wonderful way to add massive complexity to font rendering while delivering spectacularly little value.

Don't get me wrong, it's cool, I just don't see the point. And I don't look forward to my battery life being used for pointless sugar.


Best not to confuse this demo of their parametric font with the idea of parametric fonts in the first place. This page tries to vary the font as you scroll etc., which is just a (suboptimal) way of interactively demonstrating the font parameters, but not the point.

For a better demo (that does not use the same tech), see http://www.metaflop.com/modulator -- you vary the parameters, then a fixed font is generated that you can download and use. (I mentioned this in another comment below: https://news.ycombinator.com/item?id=14604890)

As for the value: the entire Computer Modern family of typefaces (used in TeX/LaTeX and friends) was generated with METAFONT which embodies the idea of font generation through "pens, programs, and parameters" -- the regular, bold, italic (even typewriter) variants of the font, and at various point sizes, are generated from common font definitions. Similarly the shapes of the loops in say p and d, etc. This ensures consistency and lets you experiment. (Though to be honest, very few people have successfully designed good font families with this approach.)

The articles that I mentioned in the other comment do a better job of conveying the point.


That metaflop link is not only a great visual example of what makes this tech cool, but is also fun to use. The immediate feedback which shows input-output relations reminds me of fun had while messing with character-creation sliders in games like Elder Scrolls and Dark Souls.


> What a wonderful way to add massive complexity to font rendering while delivering spectacularly little value.

I'll attempt to explain the value, because I don't think this demo is doing a good job of showing that. Parametric and variable fonts might seem like "pointless sugar," but it's the combination of mostly two things that makes this a huge deal: responsive typography for better legibility, and reducing the number and size of font files served over the web.

Let's say I have four font files being used on my website -- regular, bold, italic, bold italic. Let's say that's 50k per file, so 200k for four network requests. With a variable font, it's maybe ~70k for a single request. That's a huge improvement, but it's not even all that these fonts offer.

Responsive typography (adjusting for the right font weight and characteristics depending on the size of the display) is very important for legibility.[1] The slick ultrathin fonts that look good on a 27" 5k display are unreadable on a smaller display. Fonts optimized for body text look terrible when used at large sizes, hence the existence of "display" typefaces. There is so much bloat in having tons of different files for this, when the libraries that interpolate fonts in font creation software[2] can be used on the fly instead of during a compilation step.

Font rendering is cheap, sending fonts over the wire is not -- so when you frame this new font tech as something that is just as much about speeding up the web as it is about speeding up the design process, it's a little less pointless.

1. https://alistapart.com/blog/post/variable-fonts-for-responsi...

2. https://github.com/LettError/MutatorMath
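
To make the single-file point concrete, here's a rough sketch using the CSS Font Loading API in TypeScript; the font name, URL, and weight range are placeholders, not a real font:

    // One variable font file stands in for regular/bold (and everything between).
    const face = new FontFace(
      "MyVariableFont",                       // placeholder family name
      "url(/fonts/my-variable-font.woff2)",   // placeholder URL
      { weight: "100 900" }                   // the whole weight axis in one file
    );

    async function loadFont(): Promise<void> {
      await face.load();            // one network request instead of four
      document.fonts.add(face);     // make it available to CSS
      document.body.style.fontFamily = "MyVariableFont, sans-serif";
    }

    loadFont().catch((err) => console.error("font failed to load", err));

Italics would still need their own file (or an italic/slant axis in the same font), but the weight range alone already collapses the regular/bold half of the four-file example.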


You can look at it two ways. It might save some network bandwidth when designers want multiple fonts, but is it worth adding even more complexity to browsers?

Does the average user really care about fonts? I would say definitely not. Most people don't even notice the font.


> Does the average user really care about fonts? I would say definitely not. Most people don't even notice the font.

Most people don't think consciously about the font, but that doesn't mean they don't notice, or that their experience will be the same reading in one font vs another. Some companies spend a lot of money on this. Facebook and Google, for example, do A/B testing on fonts to see which perform best in ads.

But users also care when a font becomes illegible on the wrong size display and/or the wrong pixel density. The less pixel-dense a display is, for example, the bolder a font needs to be. This new tech makes it super easy for developers to actually execute these best practices, eliminates multiple steps from the asset pipeline, reduces network requests, and reduces file size. Check out the Google Fonts analytics page[1] for an idea of how many trillions of fonts served this will affect.

1. https://fonts.google.com/analytics
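
As a rough illustration of that density point (my own sketch, not something from the linked page), once a variable font with a weight axis is loaded you can do something like this; the thresholds and weight values are made up:

    // Nudge the weight of an already-loaded variable font based on pixel density.
    function applyDensityAwareWeight(el: HTMLElement): void {
      const dpr = window.devicePixelRatio || 1;
      // On low-density screens thin strokes fall apart, so bump the weight a bit.
      const weight = dpr < 1.5 ? 450 : 400;
      el.style.setProperty("font-variation-settings", `"wght" ${weight}`);
    }

    applyDensityAwareWeight(document.body);
    // Re-apply if the window moves to a display with a different density;
    // a resize listener is a crude but simple proxy for that.
    window.addEventListener("resize", () => applyDensityAwareWeight(document.body));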


Making use of the fonts on the user's machine requires even less bandwidth while allowing the user to view things in whatever font they find easiest on their eyes. Including fonts on web sites has always been about designers getting their way. This is just more of the same.


Couldn't you say the same about images?

Besides, the OP clearly described the benefits in being able to adjust the font according to screen density. Using system fonts doesn't magically solve that issue.


> Couldn't you say the same about images?

I would, if the user's machine came with a preinstalled set of images that were a close match to the need. I don't think the web browser exposes the user's standard icon sets on the web, but it absolutely should.

> Besides, the OP clearly described the benefits in being able to adjust the font according to screen density. Using system fonts doesn't magically solve that issue.

Using any font made in the last 20 years solves that issue. Vector fonts scale to any screen density just fine.

On top of that, if you don't shove a webfont down the user's throat, a user can even pick their favorite font size and density to match their screen and viewing preferences.


Sure it does: let the client render the font based on what is best for the given user on the given device.

You're more or less describing DPI scaling, no?


Wow, what a cynical viewpoint for a very cool, useful technology that we've been begging for for quite a while. Maybe you're confusing the demo with practical applications, but this is a demo.


It's a shame such a negative, snarky comment got so many upvotes. You could express disinterest in this parametric font stuff without doing it in such a nasty way.


On the other hand, it allows for a clear, concise counterpoint to be upvoted and seen fairly quickly by those that shared that view, so they may see something to change their mind prior to expressing similar views.

Edit: To clarify, a counterpoint to the initial negative comment...


If you think that's complex, read the spec for the TrueType hinting VM sometime.


I found this justification particularly ridiculous: "Imagine shop windows that react according to the movements of passers-by."

Who would think this is compelling enough to put within the elevator pitch for the concept?


I think a better justification along the same lines would be: "Imagine shop windows that react according to the weather to always make themselves as easy to see through as possible, regardless of whether it's overcast or a sunny day."

Get your window in the desired dimensions, and let some automated protocol make sure it's as easy to see through as possible for all your potential customers.

Write your content, and let the setting in the OS or application automatically make sure it's as legible as possible regardless the size of screen, pixel density, and expected viewing distance.


That is really supposed to be a major selling point of a new font standard?

The point I was trying to get across is that the creators seem to have become intoxicated on their own fumes. That use is a crazy small niche, and if you really wanted to blow money on eye catching displays, there are any number of more creative and engaging ways to do so.

(I used to know a person who had the privilege of doing the windows for Barney's in LA, and another designer who did amazing creative interactive displays for the flagship Levi's store in Soho, London.)


> That use is a crazy small niche, and if you really wanted to blow money on eye catching displays, there are any number of more creative ways to do so.

I'm not sure if that's supposed to be a followup to the simile, but it sounds just as likely that you've missed the point I was trying to relay, so I'll elaborate.

The point is not to wow your customers, it's to make things clear. Different tactics work for that in different circumstances. A high density phone display, an older desktop display, and a 4K desktop display may all benefit from slightly different settings such as thickness in a font. Allowing the OS to optimize for functional legibility based on device and usage specifications is a good capability, in my opinion. That the demo allows you to see how the system works easily through dynamic changes does not mean that's the intended use.

As others have pointed out, just the ability to ship one font for your website and let it generate the correct variation based on responsive web design parameters is itself a win in the bandwidth it saves.


You don't see the point, but you think it delivers little value. You don't understand it, but you're going to assert as fact that it's unhelpful. I don't wish you luck with that perspective; it's ignorance followed promptly by pontificating, not mere contrarianism or skepticism.


"I don't see the point" is not the same as "I don't understand the point".


Pshaw, where's my raytraced font.


I also wasn't that impressed by the fact that I had to wait a few seconds for it to work. A font demo, especially a parametric font demo, really should be instant.


Same thing eh?


Uh, no. Google is an advertising company. You know the terrible full-page creepy ads taking over mobile that make the internet almost unusable and that you can't block? Google is responsible for that.

With Google Jobs you're the product to be sold. Only this time it's not annoying ads; they're offering your employment, your life, for sale.


Google is doing just the opposite: punishing websites that use pop-up or full-page ads in search results.

https://www.theverge.com/2016/8/23/12610890/google-search-pu...


They're directly the reason that Chrome Mobile doesn't support extensions, the reason you have to put up with these ads.

Oh and a few years ago they banned ad blockers from the app store https://adblockplus.org/blog/adblock-plus-for-android-remove...

So every time you see a shitty horrible ad on mobile, the reason you have to watch it is 100% Google.


Extensions/add-ons were the #1 reason I switched from Chrome to Firefox on my mobile device. It supports uBlock Origin & others.

It's worth a try if you haven't done so lately.


Chrome still supports uBlock Origin; the Chrome Store doesn't.


Huh? Yes it does: https://chrome.google.com/webstore/detail/ublock-origin/cjpa...

I believe you mean Android and the Play Store, not Chrome and the Chrome Store.


Chrome on your mobile device?


Install Adblock Browser.


Quite the opposite. If you have a site with AdSense and a good chunk of mobile traffic, they recommend you install ads for mobile, and there are options to show a banner fixed to the bottom of the screen and another to display an ad that takes over the whole screen. Google is really contradictory in that regard if you compare what they recommend to AdSense publishers with what they say in the quality guidelines for search.


As if with recruiters you're not the product to be sold.


Recruiters only know what I tell them. Google knows almost everything about me.


Lol, recruiters have access to Google.

EDIT: To clarify, when I was a recruiter, we could build profiles on candidates, buy information on candidates, etc. It was pretty trivially easy to get all the relevant info from LinkedIn alone. My thought is, why not want that? It'd save me wasting a candidate's time if I could glance through their LinkedIn and see they weren't a good fit. Because if you had a private profile, 80% of the time I'd be able to pull your phone number from a resume you'd posted somewhere (that my company paid pennies to have access to) and give you a call, just to find out real quick if you're a good fit or not.


Google doesn't tell you what they know about me, just what the rest of the internet knows and chooses to publicize about me.

Few people would hire me if they had access to my full Google profile, and I bet that's true of a lot of others.


> Few people would hire me if they had access to my full Google profile

Your profile isn't shared with normal advertisers, so I don't see any use in sharing it with people advertising job positions.


>Few people would hire me if they had access to my full Google profile.

Why is that? What is it that google knows about you that the rest of the world couldn't somewhat easily find out?

I'm mildly surprised, because I can't think of anything google knows about me that would prevent me from being hired. The only thing is maybe my porn history, which is all in incognito mode anyway, and really not even that shocking in terms of content.


> What is it that Google knows about you that the rest of the world couldn't somewhat easily find out?

Nice try :)


Haha I had considered this. I really am curious though, could you at least point out what class of data would be dangerous? I mean, maybe I'm exposing this with no idea!


Nothing special, it's not like I would get arrested, but when you have the habit of exploring, for example, political texts of all kinds, someone skimming and cherry-picking could find plenty of stuff to mark me as "inappropriate", even if they're a negligible part of the whole.

It's essentially the idea "If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him."


Ultimately we have to accept some form of centralized control to reap the benefits of this technology at scale. Would you rather be using an AltaVista job search?


How long will it be until employers can look up all the information Google has about me? More importantly, how long will it be until users will be expected to give access to their "likes, skills, and interests" for targeted recruiting? At least LinkedIn is easy enough to keep separate from the rest of my life. Google knows everything about me.


> How long will it be until employers can look up all the information Google has about me?

Why would that be necessary? Advertisers don't get any information at all about you and it works.


Linkedin is owned by Microsoft. If you're using Windows, what Google knows about you is (potentially, hypothetically, etc.) a subset of what Microsoft knows about you.

