We did something pretty similar in a nearby field of materials science, using support vector machines rather than linear regression and focusing on a less specific target:
I'm not with the lab any more, but last I heard they were soliciting data from adjacent, related subfields (our lab was doing mostly vanadium-templated tellurides, but there are a lot of other things to explore in the space of inorganic–organic hybrids, and there are labs working on them). The problem really is data sharing; the most active collaborators were willing to give us lab notebooks as long as we paid to digitize them, but it was very hard to convince other labs to digitize their own notebooks. TBH I think that's the hard part of a lot of this (collecting the data).
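For the curious, the modelling side of that kind of project is roughly this shape (a minimal sketch only, assuming the digitized notebooks have already been reduced to a clean feature table; all column and file names here are invented):

    # Minimal sketch of SVM-based property prediction on digitized lab data.
    # The CSV file and column names are hypothetical placeholders.
    import pandas as pd
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    data = pd.read_csv("notebook_data.csv")
    X = data[["vanadium_fraction", "tellurium_fraction", "anneal_temp_c"]]
    y = data["band_gap_ev"]  # whatever property the lab was screening for

    # Support vector regression instead of plain linear regression, as above.
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(scores.mean())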
I know Moore's Law is basically dead but I think that advances in applications of existing computing power like this will resurrect it, or something like it, before too long. Basically, there's a less dramatic version of the AI snowball where faster computers lead to even faster computers. Whether someone manages to build strong AI on top of that is a different question.
There's at least a 10x speedup potential in existing hardware if people stop doing silly things like Electron apps and shoehorning everything into web browsers.
What is your recommended tutorial/book for learning to build well-designed desktop UIs that work on Windows, OSX, and multiple different flavours of Linux?
1) documentation of any chosen cross-platform UI framework
2) any decent software book that teaches you to separate the presentation layer and platform layer from the rest of the program (that way, you have much less to port between platforms, and maintaining it is not as costly as people fear); see the sketch below
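To make 2) concrete, here is a minimal sketch (all names invented) of the idea: the core logic never touches a UI toolkit, so only the thin View implementations need porting per platform.

    # Minimal sketch of separating core logic from the presentation layer.
    # All names are made up; the point is that the core knows nothing about
    # the UI toolkit, so only the thin View implementations need porting.
    from abc import ABC, abstractmethod

    class View(ABC):
        """Presentation layer: one implementation per platform/toolkit."""
        @abstractmethod
        def show_total(self, total: float) -> None: ...

    class ConsoleView(View):
        def show_total(self, total: float) -> None:
            print(f"Total: {total:.2f}")

    class InvoiceCore:
        """Platform-agnostic business logic; trivially unit-testable."""
        def __init__(self, view: View):
            self.view = view
            self.items = []

        def add_item(self, price: float) -> None:
            self.items.append(price)
            self.view.show_total(sum(self.items))

    core = InvoiceCore(ConsoleView())  # swap in a QtView/GtkView/etc. per platform
    core.add_item(9.99)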
EDIT: my purpose in this line of questioning is to assert that if you are trying to persuade someone not to do a thing, you will be more effective if you can give them a straightforward alternative.
I likewise think that the folks trying to get people to stop writing python2 should pick a release of python3 to become an LTS release in the same way that in 2014 python2.7 was effectively declared an LTS release with support until 2020.
I'm not really trying to persuade anyone; not anymore. I understand the incentives that push people toward wasteful solutions. This won't stop until we hit a resource limit. All I'm saying is that there's a lot more we can do with current hardware once Moore's law is definitely dead. The mostly untapped potential is conditioned on people not doing extremely wasteful things just to shave off a little development time.
Skia is not a GUI library, for example.
CEF is not a GUI library; it's a WebView.
And there should be at least a column indicating whether the library supports ordinary desktop applications and/or full-screen multimedia applications (such as games, or a movie player with an OSD and custom graphical design, elements, and textures).
And the GUI debate is really about "where are the abstraction libraries that compile to native apps?"; after all, anything else is just for prototyping a "native app". And usually people just answer that Qt is nice enough, yet everyone uses Electron :/
Of that list, what is the one that you would recommend from your experience working with it and what book or tutorial would you recommend someone work through to get a solid mental model of it?
I'm not complaining about Qt. It seems like a perfectly cromulent framework. But I have personally seen at least one application that would suggest that some people have tried to write Qt applications without first learning Qt.
Please, people, for the love of those whom your customer may hire after you, take the time to understand signals and slots before porting that godawful mess of garbage you got from your customer's previous contractor to Qt.
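For anyone who hasn't met them, this is the whole idea in miniature (PyQt5 used here for brevity; the C++ API follows the same pattern): the object emitting a signal never has to know which slots are listening.

    # Bare-bones signals-and-slots example. The Counter never needs to know
    # which widgets, loggers, etc. are connected to it.
    from PyQt5.QtCore import QObject, pyqtSignal

    class Counter(QObject):
        value_changed = pyqtSignal(int)   # signal carrying the new value

        def __init__(self):
            super().__init__()
            self._value = 0

        def increment(self):
            self._value += 1
            self.value_changed.emit(self._value)   # emit; don't call the UI directly

    def on_value_changed(value):           # a slot can be any Python callable
        print(f"counter is now {value}")

    counter = Counter()
    counter.value_changed.connect(on_value_changed)
    counter.increment()                    # prints "counter is now 1"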
Why? Most of the processing can be done on the backend (in a fast language if required), and as long as you maintain 60 fps on the client everything is perfectly smooth.
Given that this is doable in HTML/CSS/JS on a Nexus 5 (an almost 4-year-old smartphone), it just doesn't make economic or UX sense to optimise the UI beyond the 60 fps target.
You know that modern operating systems are multiprocess, so unless your program is a video game or something solving a computationally expensive problem, it should minimize its resource usage.
Funny you say that, because "actually matters" is the crux of the issue here.
Does the fact that a glorified IRC client eats a big chunk of system resources matter to said program's developers? Probably not, it would seem. Does it matter to me as a user of such a program? Yes, it does very much. The problem is, right now there isn't a good way for the market to signal this back to the developers so that they would care.
> right now there isn't a good way for the market to signal this back to the developers so that they would care.
Switch to a competitor's app?
Also, when people talk about Electron sucking, the one example that keeps coming up is Slack, which makes me wonder to what extent the core problem is Electron and to what extent Slack is just a badly written app.
As for switching to a competitor's app, you can't do that if it's a networked app using a proprietary protocol. The SaaS world has killed interoperability.
I think we'll see something different. We're now at 'peak datacenter' again, and because there is only so much power, cooling, and bandwidth that you can haul to a DC, we'll see the pendulum swing back towards decentralization.
We've gone through several of those cycles and they're not in any fundamental way tied to Moore's law.
That's not really what I'm talking about. I'm just talking about new applications of datacenter computing power eventually driving research to the point where we get around some of the limitations that put the brakes on Moore's Law.
The limitations that put the brakes on Moore's Law are physical limitations; there is no getting around those (at room temperature, using silicon) in this universe at a cost level that consumers are willing to accept.
Luckily for Moore, he didn't say anything about clock speeds.
He spoke of the number of transistors per area in a plane doubling every year. He didn't specify silicon. He didn't specify photolithography. He also said "at least for the next decade" in 1965.
In 1975 he revised it to every two years. In 2015 Gordon Moore himself said "I see Moore's law dying here in the next decade or so."
So let's let poor Gordon off the hook. He's being attributed things he never actually said.
No, ten years ago I wasn't so optimistic. There are a bunch of things that can and will happen to move things along incrementally, but what I think, based on no real evidence, is that there will be one or more major breakthroughs at some point that will likely be computation-assisted, and we will get back to something similar to Moore's Law.
I'm super curious what areas you feel these breakthroughs will come in.
Tunnelling effects are real and very hard to reduce, even at lower temperatures. The band gap can't get much smaller, and supply voltages are about as low as we know how to get away with.
There are solutions in terms of exotic materials with even more exotic fabrication methods.
I linked to a nice video the other day; see if it interests you:
That's the state of the art as of 2012; not much has changed since then, though there has been some incremental improvement and optimization, as well as larger dies for more cores.
Yes, but we've been aware of that one for decades; it just isn't going to happen for anything other than maybe (and that's a small maybe) memory. Removing the heat is hard enough with 3D heat-removal infrastructure and 2D chips. Removing the heat from 3D chips isn't possible without the interior of the chip going above permissible temperatures, unless you clock the whole thing down to the point where there are no gains.
> There are others that are being researched.
Yes, but nothing that looks even close to ready to go mainstream.
> Then there are the unknown unknowns...
They've been 'unknown' for a long time now.
Really, this is, as far as I can see, more hope than reason.
The human brain is 3D and has no problems with TDP and cooling, while being extremely fast at parallel processing of data like images or sound, to the point that the fastest supercomputers ever built still can't compete.
Things are already going badly for anyone who believes in Moore's Law. Intel themselves have switched focus to getting more out of silicon because they don't believe in magic speed boosts in the future. That's part of why they bought an FPGA company. It's also why AMD's Semi-Custom business was booming. Even large customers are realizing you gotta do something other than a process shrink.
That's impressive. Co₂MnTi could be very useful. Cobalt, magnesium, and titanium are all easily obtainable. 665 °C Curie temperature, which is above neodymium magnets (400 °C) but below samarium-cobalt (720-800 °C). Any drawbacks to this material? Hard to compound? Low coercivity? Hard to cast or machine?
"easily obtainable" is relative. For instance, Cobalt is actually less common in the Earth's crust than either of the two rare earths we most typically associate with magnets, Neodymium and Cerium. With Cobalt demand reaching unprecedented levels due to Lithium battery demand reaching unprecedented levels, this discovery in and of itself is not a simple economic win.
Also, magnesium is 20-25 times more common than manganese. Just as well: production of magnesium metal is pretty small because it's so difficult to work with.
If they were able to predict the Curie temperature, I'd think they could predict the coercivity as well. However, things like brittleness would probably fall outside of their domain.
What I think we are starting to see is the beginning of widespread use of computer based discovery through essentially machine learning techniques. I think AI is pretty far off but utilizing all the computing power we have to discover new materials and create useful things isn't very far off. I wonder what will happen if we can eventually tell computers to create us a better laptop, bike, phone, or lamp.
The title of the press release is misleading; there was no non-trivial machine learning/AI involved. "Computers Create Recipe for" translates to: the researchers picked a class of materials, ran DFT simulations (the usual way to simulate this sort of thing) for all combinations of elements in that class, and fabricated the ones that were predicted to have interesting properties.
The regression mentioned in the press release was only used to predict one property (the Curie temperature) of the materials based on experimental data for similar materials.
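In other words, the "learning" here is roughly of this shape (a minimal sketch, not the authors' actual model or data; the feature set and file names are invented):

    # Rough sketch of the regression step only: fit on measured Curie
    # temperatures of known compounds, then predict them for the DFT-screened
    # candidates. Features, files, and the choice of regressor are hypothetical.
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    known = pd.read_csv("known_heusler_compounds.csv")       # experimental Curie temperatures
    candidates = pd.read_csv("dft_screened_candidates.csv")  # output of the DFT screening step

    features = ["valence_electron_count", "magnetic_moment", "lattice_constant"]
    model = LinearRegression()
    model.fit(known[features], known["curie_temperature_k"])

    candidates["predicted_tc_k"] = model.predict(candidates[features])
    print(candidates.sort_values("predicted_tc_k", ascending=False).head())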
It's still a really impressive piece of work, just nothing to do with AI.
I think we need to rename AI. It's not really intelligent. It probably won't be for a long while. The machines don't know, in an existential sense, what they are looking at or making. We need to call it Artificial Insight.
They spot facts in byte streams that we don't see. We can then contextualize the info into another part of the domain, or drive it deeper.
From what I understand, we already have; that's why you usually see it called "machine learning" in research and academia - and "artificial intelligence" everywhere else.
That isn't to say there's no crossover in both directions, but usually if you are being serious on the subject, you call it ML - if you're trying to hype it or build interest, you call it AI.
Note that this only goes back so far; prior to the early 2000s or so, the terms were used more or less interchangeably. At one point, the term "machine intelligence" was used, then died out, but I've seen it used again recently.
Ask humans to tell you what "cat" means, and you'll receive as many answers as there are respondents. Some will derive from science, some from common experience; some will describe the relation of cats to their environment, others will talk about personal emotional connections with particular cats.
Ask a convolutional neural network what "cat" means, and the best you can get is a probability it assigns to a given grid of pixels. It's not intelligence, but just an encoding of facts provided by an actual intelligence.
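Concretely, the network's entire "answer" looks something like this (a sketch using a pretrained torchvision classifier; "cat.jpg" is a hypothetical local file):

    # What a CNN's "understanding" of "cat" amounts to: a probability assigned
    # to a class index for a given grid of pixels.
    import torch
    from PIL import Image
    from torchvision import models, transforms

    model = models.resnet18(weights="IMAGENET1K_V1")
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    pixels = preprocess(Image.open("cat.jpg")).unsqueeze(0)   # 1 x 3 x 224 x 224
    with torch.no_grad():
        probs = torch.softmax(model(pixels), dim=1)

    # ImageNet class 281 is "tabby cat"; this single number is the whole answer.
    print(probs[0, 281].item())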
No, you'll get the same kind of answer. It's not like one of the neural networks will write me a poem in response, on its own initiative. The form of the answer was decided by the human intelligence that created the neural net encoding.
The form of the human's answer was decided by the genetic code that led to the formation of the brain and the experiences the brain was exposed to up to the question. The brain is more complex by many orders of magnitude than your garden variety artificial neural network, so it is only expected that the range of possible answers is also broader.
Because they do tasks that people think require intelligence. It's like calling a water mill and a fusion reactor both devices that can generate energy.
http://www.nature.com/nature/journal/v533/n7601/abs/nature17...