Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to (docs.google.com)
1656 points by flor1s on Dec 14, 2017 | 120 comments



As someone who works with a lot of people new to machine learning, I appreciate guides like this. I especially like the early slides that help frame AI vs ML vs DL so that people can have a realistic understanding of what these technologies are for.

For my part, one of the biggest realizations I had after many years of applying machine learning was that I got too caught up in the machine learning algorithms themselves. I was often way too eager to guess and check across different algorithms and parameters in search of higher accuracy. Fortunately, there are new automated tools today that can do that searching for you.

However, the key piece of advice I'd give someone new to machine learning is not to get caught up in the different machine learning techniques (SVM vs random forest vs neural network, etc). Instead, spend more time on (1) translating your problem into terms a machine can understand (i.e., how you are defining and generating your labels) and (2) feature engineering, so that the right variables are available for machine learning to use. Focusing on these two things helped me build more accurate models that were more likely to be deployed in the real world.
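
To make (2) concrete, here's a toy sketch of the kind of thing I mean (plain pandas, made-up column names): rolling a raw transactions table up into one row of features per customer, which is the shape most algorithms actually expect.

    import pandas as pd

    # Hypothetical transactions table: one row per purchase (made-up columns).
    transactions = pd.DataFrame({
        "customer_id": [1, 1, 2, 2, 2],
        "amount": [10.0, 25.0, 5.0, 7.5, 12.0],
        "returned": [0, 1, 0, 0, 1],
    })

    # Roll the raw rows up into one row of features per customer.
    features = transactions.groupby("customer_id").agg({
        "amount": ["count", "sum", "mean"],
        "returned": "mean",
    })
    features.columns = ["n_purchases", "total_spent", "avg_spent", "return_rate"]
    print(features)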

Feature engineering in particular has become a bit of a passion of mine since that realization. I currently work on an open source project called Featuretools (https://github.com/featuretools/featuretools/) that aims to help people apply feature engineering to transactional or relational datasets. We just put out a tutorial on building models to predict what product a customer will buy next, which is a good hands-on example for beginners to learn from: https://github.com/featuretools/predict_next_purchase


Don't you think people are, sometimes, just applying ML to their problem "because of hype"?

One example I have in mind was a contest where participants were given a series of satellite pictures and asked to write a classifier to detect icebergs and cargo ships (the two are quite similar). As someone else pointed out, trying to use classical computer vision and machine learning on these images will always have some error rate during identification. However, if we were able to extract the speed and trajectory of all objects in the picture and mix them with AIS data, finding which ones are ships, which ones are giant pieces of ice, and which ones are non-moving structures to be avoided becomes easy.

So, you have to choose between a black box that will give you potential results with a given error rate, and a predictable algorithm that anyone can audit. Seems like a no-brainer situation to me. For what other reason would you choose the first solution, except hype-related decisions?


Your comparison seems like a false dichotomy, and I think you are agreeing with OP. OP says, spend less time worrying about the algorithm and more time worrying about what data you are feeding the algorithm. You are saying, what if you had to choose between dataset A with algorithm A and dataset B with algorithm B.

You claim, (probably correctly) that dataset B, which includes velocity and trajectory, is more correct for the problem at hand, and given dataset B, I would suggest that either algorithm A or B would probably do just fine.

You also claim that algorithm A has "some error rate during identification." But so will algorithm B, and so will either algorithm on dataset A and B!

The question you should ask is, how much do I care about "black box" vs. "white box", and is there a trade-off? If the black-box solution (algorithm A, the "ML" solution) gives you 10% higher accuracy, and that accuracy is going to save lives, you bet I'd choose it. Or maybe I decide that interpretability is really important due to external audit reasons, so I need the white-box solution. Or maybe I'd use both: deploy the interpretable one, and use the uninterpretable one as a flag for "a human should look at this." Or maybe I'd combine the results of both algorithms to get even higher accuracy.

There are just so many ways to configure a solution to the problem you propose, and you are only distinguishing between two of them. In the end the appropriate choice depends on context.


The thing is, if you know exactly what you are looking for, like in your example, or a QR code, or a barcode, it makes sense to tailor an algorithm. But you may not want to have to maintain a complex algorithm every time a small change happens (say a new kind of ship appears). Or you might want a generic approach (recognise any objects, including objects that did not even exist at the time the code was written, but will appear in the data). In such a case I can see ML being a good choice.


You wouldn't, and any data scientist worth their salt would recommend that the business choose the latter option.


> I especially like the early slides that help frame AI vs ML vs DL so that people can have a realistic understanding of what these technologies are for.

But they're wrong! I read "Deep learning drives machine learning which drives artificial intelligence." This is very wrong. I stopped reading.


How is it wrong? What's the correct hierarchy?


AI is the overall field and the superset of all of the various approaches.

ML is one family of approaches for knowledge acquisition in AI, but far from the only one (eg. logic based inference is another big one).

DL is a family of approaches in supervised ML. As the author points out, it's a subset of a subset.

But saying that this sub-subset "drives" AI is like saying endocrinology "drives" medicine: not the right mental model at all.


If endocrinology was the most talked-about, hyped, and invested-in form of medicine I think it would be fair. DL is where an enormous amount of AI growth and progress is.


Hyped things get hyped from a belief that they can be 'done' and are therefore invested in. But that applies to one part of ML, which gives no reason to say it drives all machine intelligence for human tasks.


My understanding is that Deep Learning is a type of Machine Learning. Artificial Intelligence is the idea that a machine performs similarly or better than a human for a specific task. Artificial General Intelligence is when a machine performs similarly to a human in many different kinds of tasks.


Thanks very much for this. I've recently dived into ML & DL and have slowly but surely realized the importance of feature engineering (FE). Though I've taken a few MOOCs, I haven't found one that truly focuses on FE and am still looking.


For sure, usually the algorithms aren't the interesting part, but rather how you frame the problem and most importantly what data you have.

I wish I could say I was passionate about feature engineering. I enjoy where deep learning is heading right now - where that kind of finicky, more-art-than-science approach becomes unnecessary, and the model does a better job detecting features than humans.


What are best resources for "defining and generating" labels? Any recommendations?


I don't know of a definitive public resource for this. I published a paper in IEEE's Data Science and Advanced Analytics conference on it back in 2016. You can find that here: https://dai.lids.mit.edu/wp-content/uploads/2017/10/Pred_eng...

Additionally, my company (link in profile) builds a commercial product to help people define and iterate on prediction problems in a structured way based off of the ideas in that paper.


You need a good random sample and lots of manpower to manually label them. Mechanical Turk [0] is one place to go for that manpower if you don't have a "grunt work" team and are not willing to spend a few days doing it yourself.

There are also some methodologies out there that can help you label data sets more efficiently. I don't often see them used, but they exist. Look up "active learning" and "semi-supervised learning".

[0]: http://nlp.cs.illinois.edu/HockenmaierGroup/Papers/AMT2010/W...
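
To sketch the active learning idea mentioned above: train on what you have, then send the examples the model is least sure about to the annotators first. A minimal uncertainty-sampling sketch (toy data; get_label() is a stand-in for a human annotator):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.RandomState(0)
    X_pool = rng.randn(1000, 5)              # unlabeled pool
    X_lab = rng.randn(20, 5)                 # small labeled seed set
    y_lab = (X_lab[:, 0] > 0).astype(int)

    def get_label(x):
        # Stand-in for asking a human (e.g. a Turker) to label one example.
        return int(x[0] > 0)

    for _ in range(50):                      # label 50 more examples, one at a time
        clf = LogisticRegression().fit(X_lab, y_lab)
        p = clf.predict_proba(X_pool)[:, 1]
        i = int(np.argmin(np.abs(p - 0.5)))  # the example the model is least sure about
        X_lab = np.vstack([X_lab, X_pool[i]])
        y_lab = np.append(y_lab, get_label(X_pool[i]))
        X_pool = np.delete(X_pool, i, axis=0)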


There were some folks in this area targeting the generation of annotation tools of various kinds. I believe the focus was on something reducible to a web interface, but if you can label things going that route, check it out: https://en.wikipedia.org/wiki/CrowdFlower


This tool is pretty interesting.

I've been playing around with a similar idea for text. Do you already do that?


> Fortunately, there are new automated tools today that can do that automatically.

can you please elaborate?


One very old tool for such things was called "stepwise regression". IIRC J. Tukey was partially involved in that. The AI/ML work appears close to the regression and curve fitting that goes back strongly to the early days of computers in the 1960s, with a lot in the social sciences back to the 1940s and even around 1900.

A lot is known. E.g., there's the now classic Draper and Smith, Applied Regression Analysis. Software such as the IBM Scientific Subroutine Package (SSP), SPSS (Statistical Package for the Social Sciences), SAS (Statistical Analysis System), etc. does the arithmetic for texts such as Draper and Smith. For some decades some of the best users of such applied math were the empirical macroeconomic model builders. E.g., once at a hearing in Congress I heard a guy, IIRC Adams, talking about that.

Lesson: If you are going to do curve fitting for model building, then a lot is known. Maybe what is new is working with millions of independent variables and trillions of bytes of data. But it stands to reason that there will also be problems with 1, 2, 1 dozen, or 2 dozen variables and some thousands or millions of bytes of data, and people have been doing a lot of work like that for over half a century. Sometimes they did good work. If you want to do model building on that more modest and common scale, my guess is that you should look mostly at the old, very well done work. Here is just a really short sampling of some of that old work:

Stephen E. Fienberg, The Analysis of Cross-Classified Data, ISBN 0-262-06063-9, MIT Press, Cambridge, Massachusetts, 1979.

Yvonne M. M. Bishop, Stephen E. Fienberg, Paul W. Holland, Discrete Multivariate Analysis: Theory and Practice, ISBN 0-262-52040-0, MIT Press, Cambridge, Massachusetts, 1979.

Shelby J. Haberman, Analysis of Qualitative Data, Volume 1, Introductory Topics, ISBN 0-12-312501-4, Academic-Press, 1978.

Shelby J. Haberman, Analysis of Qualitative Data, Volume 2, New Developments, ISBN 0-12-312502-2, Academic-Press, 1979.

Henry Scheffe, Analysis of Variance, John Wiley and Sons, New York, 1967.

C. Radhakrishna Rao, Linear Statistical Inference and Its Applications: Second Edition, ISBN 0-471-70823-2, John Wiley and Sons, New York, 1967.

N. R. Draper and H. Smith, Applied Regression Analysis, John Wiley and Sons, New York, 1968.

Leo Breiman, Jerome H. Friedman, Richard A. Olshen, Charles J. Stone, Classification and Regression Trees, ISBN 0-534-98054-6, Wadsworth & Brooks/Cole, Pacific Grove, California, 1984.

There is a lesson about curve fitting: the ancient Greek Ptolemy took data on the motions of the planets and fitted circles, and circles inside circles, etc., and supposedly, except for some use of Kelly's Variable Constant and Finkel's Fudge Factor, got good fits. The problem: his circles had next to nothing to do with planetary motion; instead, that's based on ellipses, and that came from more observations, Kepler, and Newton. Lesson: Empirical curve fitting is not the only approach.

Actually the more mathematical statistics texts, e.g., the ones with theorems and proofs, say: "We KNOW that our system is linear and has just these variables, and we KNOW about the statistical properties of our data, e.g., Gaussian errors, independent and identically distributed, and ALL we want to do is get some good estimates of the coefficients, with confidence intervals and t-tests and confidence intervals on predicted values." Then we can go through all that statistics and see how to do that. But notice the assumptions at the beginning: we KNOW the system is linear, etc., and are ONLY trying to estimate the coefficients that we KNOW exist. That's long been a bit distant from practice and is apparently still farther from current ML practice.
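
For concreteness, the setting those texts assume looks something like this toy sketch (statsmodels doing the arithmetic the old packages used to do; data made up so the assumptions hold by construction):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.RandomState(0)
    x = rng.randn(100)
    y = 2.0 + 3.0 * x + rng.randn(100)   # we KNOW: linear, Gaussian i.i.d. errors

    X = sm.add_constant(x)               # intercept plus the one known variable
    fit = sm.OLS(y, X).fit()
    print(fit.params)                    # coefficient estimates
    print(fit.conf_int())                # confidence intervals
    print(fit.tvalues)                   # t-statistics, as in Draper and Smith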

Okay, ML for image processing. Okay. I am unsure about how much image processing there is to do where there is enough good data for the ML techniques to do well.

Generally there is much, much more to what can be done with applied math, applied probability, and statistics than curve fitting. My view is that the real opportunities are in this much larger area and not in the recent comparatively small area of ML.

E.g., my startup has some original work in applied probability. Some of that work does some things some people in statistics said could not be done. No, it's doable: But it's not in the books. What is in the books is asking too much from my data. So, the books are trying for too much, and with my data that's impossible. But I'm asking for less than is in the books, and that is possible and from my data. I can't go into details in public, but my lesson is this:

There is a lot in applied math and applications that is really powerful and not currently popular, canned, etc.


Stepwise regression is not to be recommended because it's very easy to fool oneself.

http://www.sascommunity.org/mwiki/images/e/e2/NYASUG-2007-Ju...

http://www.barryquinn.com/the-statistical-dangerous-of-stepw...

Shrinkage methods like lasso/elasticnet are less susceptible to these problems.
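
A minimal illustration of the difference (sklearn, toy data where only 3 of 50 candidate variables matter): the cross-validated lasso shrinks the irrelevant coefficients to zero, rather than greedily adding and dropping variables the way stepwise selection does.

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.RandomState(0)
    X = rng.randn(200, 50)
    y = 3 * X[:, 0] - 2 * X[:, 1] + X[:, 2] + rng.randn(200)

    model = LassoCV(cv=5).fit(X, y)
    print("variables kept:", np.flatnonzero(model.coef_))   # ideally [0, 1, 2]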


I agree fully. No doubt Breiman took some steps beyond what Tukey did. My only point was that the question and some answers are old.


Thank you for the list of resources.

Are you able to go into more detail about your startup (problems it is solving)?


Okay, here, just for you. Don't tell anyone!

My view is that currently there is a lot of content on the Internet and the total is growing quickly. So, there is a need -- people finding what content they will like for each of their interests.

My view is that current means for this need do well on (rough ballpark guesstimate) about 1/3rd of the content, searches people want to do, and results they want to find. My work is for the "safe for work" parts of the other 2/3rds.

The user interface is really simple; the user experience should be fun, engaging, and rewarding. The user interface, data used, etc. are all very different from anything else I know about.

The crucial, enabling core of the work, the "how to do that", the "secret sauce", is some applied math I derived. It's fair to say that I used some advanced pure math prerequisites.

To the users, my solution is just a Web site. I wrote the code in Microsoft's Visual Basic .NET 4.0 using ASP.NET for the Web pages and ADO.NET for the use of SQL Server.

The monetization is just from ads, at first with relatively good user demographics and later with my own ad targeting math.

The Web pages are elementary HTML and CSS. I wrote no JavaScript, but Microsoft's ASP.NET wrote a little for me, maybe for some cursor positioning or some such.

The Web pages should look fine on anything from a smart phone to a high end work station. The pages should be usable in a window as narrow as 300 pixels. For smaller screens, the pages have both horizontal and vertical scroll bars. The layout is simple, just from HTML tables and with no DIV elements. The fonts are comparatively large. The contrast is high. There are no icons, pull-downs, pop-ups, roll-overs, overlays, etc. Only simple HTML links and controls are used.

Users don't log in. There is no use of cookies. Users are essentially anonymous and have some of the best privacy. For the user to enable JavaScript in their Web browser is optional; the site works fine without JavaScript -- without JavaScript maybe sometimes users will have to use their pointing device to position the cursor.

There is some code for off-line "batch" processing of some of the data. The code for the on-line work is about 24,000 programming language statements in about 100,000 lines of typing. I typed in all the code with just my favorite text editor KEdit.

There is a little C code, and otherwise all the code is in Microsoft's Visual Basic .NET. This is not the old Visual Basic 6 or some such (which I never used) and, instead, is the newer Visual Basic part of .NET. This newer version appears to be just a particular flavor of syntactic sugar and otherwise as good a way as any to use the .NET classes and the common language runtime (CLR), that is, essentially equivalent to C#.

The code appears to run as intended. The code should have more testing, but so far I know of no bugs. I intend alpha testing soon and then a lot of beta testing announced on Hacker News, AVC.COM, and Twitter.

For the server farm architecture, there is a Web server, a Web session state server, SQL Server, and two servers for the core applied math and search.

I wrote the session state server using just TCP/IP socket communications sending and receiving byte arrays containing serialized object instances. The core work of the Web session state server is from two instances of a standard Microsoft .NET collection class, hopefully based on AVL or red-black balanced binary trees or something equally good.

The Web servers do not have user affinity: That is, when a user does an HTTP POST back to the server farm, any of many parallel Web servers can receive and process the POST. So, the Web servers are easily scalable. IIRC, Cisco has a box that will do load leveling of such parallel Web servers. Of course, with the Windows software stack, the Web servers use Microsoft's Internet Information Server (IIS). Then IIS starts and runs my Visual Basic .NET code.

Of course the reason for this lack of user affinity and easy scalability is the session state server I wrote. For easy scalability, it would be easy to run hundreds of such servers in parallel.

I have a few code changes in mind. One of them is to replace the Windows facilities for system logs with my own log server. For that, I'll just start with my code for the session state server and essentially just replace the use of the collection class instances with a simple file write statement.

I wrote no prototype code. I wrote no code intended only as a "minimum viable product". So far I see no need to refactor the code.

The code is awash in internal comments. For more comments, some long and deep, external to the code, often there are tree names in the code to the external comments, and then one keystroke with my favorite editor displays the external comments. I have about 6000 files of Windows documentation, mostly from MSDN, and most of the tree names in the comments are to the HTML files of that documentation.

I have a little macro that inserts time-date stamp comments in the code, e.g.,

Modified at 23:19:07 on Thursday, December 14th, 2017.

and I have some simple editor macros that let those comment lines serve as keys in cross references. That helps.

The code I have is intended for production up to maybe 20 users a second.

For another factor of 10 or 20, there will have to be some tweaks in some parts of the code for more scaling, but some of that scaling functionality is in the code now.

For some of the data, a solid state drive (SSD), written maybe once a week and otherwise essentially read-only, many thousands of times a day, would do wonders for users served per second. Several of the recent 14 TB SSDs could be the core hardware for a significant business.

Current work is sad -- system management mud wrestling with apparently an unstable motherboard. At some effort, I finally got what appears to be a good backup of all the files to an external hard disk with a USB interface. So, that work is safe.

Now I'm about to plug together another computer for the rest of the development, gathering data, etc.

I'm thinking of a last generation approach, AMD FX series processor, DDR3 ECC main memory, SATA hard disks, USB ports for DVD, etc., Windows 7 Professional 64 bit, Windows Server, IIS, and SQL Server.


I want my next side project to use machine learning. Not sure what to do though. Thanks for posting your project. I'll be checking it out. Feature engineering sounds interesting.


Deep learning frees you from the need to do "feature engineering" and usually works much better than methods which require such a process. I'd instead recommend everyone "dive deep" into deep learning and, once they master it, get acquainted with classical methods that still might get used here and there. I understand it's difficult to let go of what you worked very hard to understand when you were studying ML, but such is life; the "sunk cost fallacy" should not blind you from seeing a 95% success rate with DL while observing a paltry 60% success rate with SVM/HMM on the same problem. Just let it go.


This might be true for CV and speech recognition and synthesis, but there are huge categories of problems (dare I say, the majority of industry use of ML) that are either working with time series data (which DL hasn’t had great success with) or must be highly explainable and tunable.


Or you don't have millions of annotated examples to learn from, and no similar problem to transfer from...


I make most of my money from time series data, use deep learning, and work with data that has no labels. Here's a recent presentation I did on some of this work, and a companion presentation I encourage people to read on how to use this effectively in production.

While you are right that some feature engineering is needed, there's no reason DL can't be a part of your workflow.

https://www.slideshare.net/agibsonccc/anomaly-detection-and-...

https://www.slideshare.net/pacoid/humanintheloop-a-design-pa...

For more of the basics, my book on deep learning might help as well (minimal math vs the standard text book):

http://shop.oreilly.com/product/0636920035343.do


> While you are right that some feature engineering is needed, there's no reason DL can't be a part of your workflow.

I (and, I believe, the earlier poster, too) never implied you can't use deep learning on such examples. What we (I think, both) were referring to was the claim that it would absolve you from feature engineering. (Which I understand you also refute.)

> For more of the basics, my book on deep learning might help as well

Congratulations on your book, I know how much hard work that is!

Disclaimer: I make money with deep learning, too... ;-)


A lot of people make money with deep learning with images ;).

I guess what I wanted to do was add a bit of nuance. It can help reduce the amount of feature engineering needed. Of course you still need a baseline representation though. More feature engineering also doesn't hurt. I always think of deep learning in the time series context as a neat SVM kernel with some compression built in. With the right tuning it can give you a better representation which you can use with clustering and whatever else you'd like.
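
A toy sketch of what I mean (in Keras for brevity, with made-up windows of a series): a tiny autoencoder learns a compressed code for each window, and you cluster on that learned representation.

    import numpy as np
    from keras.layers import Input, Dense
    from keras.models import Model
    from sklearn.cluster import KMeans

    X = np.random.randn(1000, 64)                 # 1000 windows of 64 time steps (made up)

    inp = Input(shape=(64,))
    code = Dense(8, activation="relu")(inp)       # compress each window to 8 numbers
    out = Dense(64)(code)
    autoencoder = Model(inp, out)
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(X, X, epochs=10, batch_size=32, verbose=0)

    encoder = Model(inp, code)                    # learned representation
    clusters = KMeans(n_clusters=5).fit_predict(encoder.predict(X))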


I work with language, not images. There, clever feature engineering isn't just better, it's essential to get anything that is production worthy. In fact, it will even be embedded in some expert system process if your system needs to understand very complex relationships. AI around the corner my ass... :-)


Agreed :). Workflow matters a lot more than the hype Sand Hill Road and Google's marketing team are perpetuating. Good on you for making it work in the real world for something outside of vision/speech!


Thanks for the info! The book looks interesting.

Do you have an opinion on the fast.ai and deeplearning.ai courses? I finally have some time to work through these and since the deeplearning.ai series starts on December 18th, I'm wondering which one to dive into since I can't tell from the outside how they compare.


While I agree with others that more is better, if you can take only one course, I strongly recommend taking Andrew Ng's. While it is true that you don't need to be able to design and understand 'nets from scratch to be able to use them, I agree with most of the brightest minds in DL that you won't get too far if you don't at least have an intuition for the math behind it. And Ng's course really only gives you that - an intuition. It does an excellent job at ensuring participants understand the bare minimum to do any kind of serious work. Learning ${your favorite framework}'s API will be a breeze if you understand the "why" already.


I would take both. deeplearning.ai focuses more on math fundamentals, fast.ai takes a more coding oriented approach. It also has 2 classes: a beginner and advanced one. I personally prefer the fast.ai approach.


Add Udacity's DLF ND to the mix and do all 3 of them, they are all a bit different. Udacity's one has the inventor of GANs doing lectures there, so it's pretty top notch as well.


The presentation goes straight from linear regression and classification to computer vision and reinforcement learning.

The practical value of ML/AI is what’s in between and is something that isn’t often discussed between all the hype. ML/AI can be used to build models which work well with nontabular data (e.g. text and images), and can solve such regression/classification problems more cleanly. (and with tools like Keras, they’re as easy to train and deploy as a normal model)
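
For example, a minimal Keras classifier on nontabular data is only a few lines (illustrative only; the images, labels, and shapes below are made up):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Conv2D, Flatten, Dense

    # Made-up data: 500 tiny 16x16 grayscale "images", two classes.
    X = np.random.rand(500, 16, 16, 1)
    y = (X.mean(axis=(1, 2, 3)) > 0.5).astype(int)

    model = Sequential([
        Conv2D(8, (3, 3), activation="relu", input_shape=(16, 16, 1)),
        Flatten(),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)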


I think slide 12 touches on this. Even in the case of an image we can process it pixel by pixel, but that would be lunacy!

For text, great results have been achieved using automata, but they only work for structured strings and break if you add only a little bit of noise.

I feel like ML should be considered whenever programming something requires you to deal with many different cases, you have a lot of example data available, and having some false positives / false negatives is not a big problem.


Sorry, where do you see this? I see a lot of slides devoted to the "in-between" of random forests, perceptrons, etc. The jump from supervised to unsupervised to RL also makes sense, since RL is a different learning paradigm from the other two.

I'm as exhausted of the ML hype as anyone else, but I believe this deck tempers expectations.


If I understand correctly, those are slides from a Googler (not sure if they have corporate approval) that probably have as a side goal to showcase that Google is a fun place to do ML.

Not that I am judging or anything, but the author's personal website http://www.jasonmayes.com/ , whose link is displayed multiple times, is a giant ad to get hired elsewhere and shows at least some desire for other career opportunities. Not sure that reflects greatly on the company.


Checking his website, it reeks of narcissism. There are better ways to assert yourself than to do all the corny things he has done on his self promotion website.


Are you honestly slagging a guy off for talking about himself on his resume???

I mean yeah, we computer folk are supposed to be all self deprecating and all. But if there is one place we should stop mumbling and talking ourselves down for a second, that is it.

At some point if you want people to know what you do, you're going to have to tell them.


I found his approach tacky, loud and insincere.

Of course you should be talking about yourself on your resume, but a couple of things are different here:

- Wtf is up with the music / the 51%/49% thing.
- Publicly asking to be hired, which reflects poorly on his current job at Google.
- Excessively loud self marketing.

Why not have a simple site with your accomplishments? Why all the excess bullshit?


I think that's definitely a cultural thing. It would be in poor taste in my country, but seems very American.

And many of the things he did do sound like he could be a good contributor - I guess you can't know about his personality without an interview.


I sorta draw the line at autoplaying music. Apart from that, he's done a good job. How many of us are bold enough to put long list of glowing reviews on our resume?


Wow, that site is extremely weird and off-putting. Scroll down to the section "What are people saying about me?" to read what I can only assume is his friends being asked to write promotional blurbs about him. I can understand putting your best foot forward on your résumé, but this is something else.


This is a deeply unfair, unreasonable and arguably abusive comment.

It's entirely reasonable to talk about yourself and your achievements on your resume, and Mr Mayes' site is rather a good example of doing so.


No one is talking about the resume itself. I'm talking about his approach.

If I see this kind of resume land on my desk, it gets thrown out.


Given the number of plugs for Google products/projects/research (especially near the end) it's probably intended to be more of an ad for Google.


Maybe. OTOH, this guy's resume says he's a web programmer. I'd think if google were recruiting people interested in machine learning, they'd get one of their machine learning specialists to write this.


Specialists are usually the worst teachers, because they assume that you know trivial things. What appears trivial to them is not trivial to the audience though.

https://en.wikipedia.org/wiki/Curse_of_knowledge


Slide 64: "A whole tonne of stuff going on in robotics right now. Just take a look at Boston Dynamics' YT channel for some mind-blowing research, most of which is driven by ML."

I highly doubt that BD is doing any ML work right now ... Can the author link to specific research that they are doing using ML?


You mean public work perhaps? I imagine they are doing a lot with vision, gait learning, object manipulation, task planning, autonomy, multi-robot coordination, etc. all of which can be enabled by or at least helped along by machine learning, no? Your request for links is valid, I just am surprised anyone would doubt that they are doing ML research unless you are thinking of a strangely narrow definition of ML.


Most of robotics is about reinforcement learning.

EDIT: Oh, and expert systems/rules. Lots of em.

EDIT2: Well, and engineering, obviously... :-) Heck, just check Wikipedia on the topic...


>Most of robotics is about reinforcement learning.

is != always will be.

But forget that, let's check Wikipedia, as you suggest.

What are the first few words of the article on Reinforcement Learning, hmm. The very first few words at the very beginning of the article:

"Reinforcement learning (RL) is an area of machine learning..."

Read that and tell me what the last two words are. "Machine learning."


As I remember, they don't use any deep learning ML. I think their stuff is based on something about funnels.


For those interested, I saw the funnel stuff here: https://www.youtube.com/watch?v=7enj1FGoYwg&feature=youtu.be...


A bit extreme, but yes, saying that robotics is driven by ML is ... weird.


Good slides; they got me back into the fever of wanting to learn, although a lot of the credit goes to the linked 3Blue1Brown videos (whose Essence of Linear Algebra series is excellent), which were a lot more technical but no less approachable.

Question to those versed in ML: I want to work on an AI that plays a video game (aspirations of playing something like Rocket League, but I know I need to start smaller with something like an old NES game). I understand these are usually done with recurrent neural networks, but I'm a little lost as to how to get data into the RNN -- will I need to make another AI or CNN to read the screen and interpret it (including the score)? My 30k ft view is that if I can define a 'score', give it a 'reset' button, and define 'inputs (decision targets)', then I just need to give it the screen and let it do its thing. But getting the 'score' is the part I can't figure out, short of adding another layer to the classifier.
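
To make the question concrete, my rough mental picture of the loop (assuming an OpenAI-Gym-style Atari environment, where the emulator already exposes the score as the reward, so no separate classifier is needed to read it off the screen; random actions stand in for the agent):

    import gym

    env = gym.make("Breakout-v0")            # hypothetical choice of game
    obs = env.reset()                        # obs is the raw screen (an RGB array)
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()   # random policy; the learned agent goes here
        obs, reward, done, info = env.step(action)
        total_reward += reward
    print("episode score:", total_reward)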


You should check out Berkeley's deep reinforcement learning course[1]. There's lecture videos, slides, and homework assignments, and it's all very up-to-date.

[1] http://rll.berkeley.edu/deeprlcourse/



The document is awesome, but the animated backgrounds are distracting.


Exactly, I stopped at the second slide because of that. "I ask for your undivided attention for two hours" is what it says; the background animation does not seem to help that goal, quite the opposite.


Agreed, I couldn't get past the first few slides without getting annoyed by the animated backgrounds and closing the presentation out.


Completely agree. The movement in the animations keeps grabbing my attention. The first slide took multiple attempts to read without getting distracted. Maybe my attention span is just bad, but I really want to understand the slides! :(


I tried to edit it to prevent animation and apparently I can't edit or copy or download or export. Gotta try printing next.


Reading normally, and skipping the videos, the whole deck takes about 15 minutes. The last third of the slides is basically promotional material for the various Cloud ML services that are out there.

It's a nice deck, but I'd hoped the blue slides went more technical without dropping out to various videos. If I wanted videos, I'd go to YouTube directly. Not everyone wants to learn through watching people talk. I learn best when I read; it's unfortunate that youngsters these days think the written word is now a poor cousin to flashy video.

<rant>

In the same way that new clothes are no longer for me, and new music is no longer for me, and all good TV shows and films are full of people half my age, I also now feel that I'm being aged off the internet.

I was here first, you young whippersnappers! It's MY lawn.

</rant>


Cool presentation ... but there are a million like this. We don't need yet another basic introduction to machine learning; we need detailed practical studies of real problems.


Easier to curse the darkness than to light a match.



Loading...

Google Slides is really slow. That's why this needs two hours.

Most of the real content is in linked videos.


"There is no golden road to geometry"

If you really want to understand you would be much better off starting here:

https://work.caltech.edu/telecourse.html


I was absolutely convinced by the title that this would be a link to a research blog post about an analysis of hair motion at metal shows.


Me too; after I read "Perceptron" I fast-forwarded, lol.


Yeah, like how you should do it if you don't want to damage your neck.


This ends up not being much more than an advertisement for google. Wikipedia's articles on these subjects have more depth.


The information is great, but it would be much more readable in simple text form or as a PDF. It's strange that a senior creative engineer at Google doesn't know presentation-making basics.


It's not surprising that a Google engineer would use Google docs. It's at least easily shareable and there are complementary embedded videos that aren't suitable for text/PDF anyway.

Though, the options to export as a PDF didn't work for me (either via download or as an export to Google Drive). I'm assuming the presentation is too big.


The document has export feature disabled, but it doesn't change the viewer UI and there's no error message. I wouldn't mind google docs if it wasn't for the heavy animations ...


Same here, just tried to download the PDF multiple times but it always failed (didn't even start; tested Chrome and Firefox)...


It has been explicitly set to be non-downloadable or exportable.


Can't download the PDF either.


This might be intended behavior. If you try to print the deck, it shows the Google Drive access request page.


strobe morphing advertisements that are trained to adapt to your unique reaction(s) until it detects the same facial response as the last time you made a purchase. botox, masks, camouflage tattoos, permanent smiling, IR obscurants. the arms race begins anew. benevolent dictatorship by self regulating machines is probably going to plague us for several thousand years. wouldn't that be interesting, if the last evolutionary bottleneck is getting smart enough to create an enclosed planetary system that fully satisfies your biological needs and suppresses all intangible ones so there is no reason to go any further up the chain of exploration.

stopping death means stopping life, every generation is more forgetful of the past than the last. the light ages will be far more destructive, what could possibly motivate you to stop a perpetual pleasure machine? how do you prevent the inevitable conflict between those who insist pain and suffering is an essential part of the human experiment and those who just force them to feel good and change their mind? what will happen to these toddlers in 10-15 years? they will have grown up interfacing with some electronic device for every single day of their young lives, a different type of consciousness shaped by destruction of self-confidence in their own knowledge and memories and a complete trust in the needle-finders of big hay.

this shift will be as important as pre-writing to post-writing, except the transformation won't take centuries and millennia to propagate itself across the planet. a post-memory world, with every human enslaved by their base sensations. the first US president who is an internet addict.

"[Writing] will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality."

the future is a fate worse than death.


That's a strongly pessimistic view you've got there. Are you aware that there are people thinking the opposite of you? The thing is: reality is always far more balanced than what the extremists are preaching to us.


is being disagreeable sufficient claim to argument?[0] is there a profit motive in balance?[1] people with vast weapons of control, deception and war shape reality, is it balanced?[2]

[0][1][2] no


I'm sorry, I just don't see what we are arguing about, to be honest.


Dude look at the way he expressed his thoughts. Either a troll or a loon. Best not to engage him either way.


i have seen a few of your comments and i would like to talk to you further. can you contact me? my email is my username @gmail


Nice introduction, but I really don't see how "2 years of headbanging, so you don't have to" applies.


I think the author meant "banging my head against the wall" while getting it all working but didn't realize the term has a very different meaning...


There is no quick way to get a good grasp on ML. You just need to spend the time needed to get there, reading and working on simple problems, carefully validating that you understand concepts as you go. It's like asking for a way to learn mathematics or computer programming in an hour. Hint: there isn't one.


I created nearly the same presentation this week. It's good to see that I didn't miss much, though this one goes deeper, which I don't do on purpose. I'll probably send attendees this presentation afterwards for those who want to go deeper. Very cool!


Pretty similar information to my semester-long machine learning class in undergrad (with less detail). Good information to get the basics down, but I can't say I've gone on to apply any of the information I learned yet... Still a useful set of slides.


Not bad but toward the end it basically just becomes a big pitch for Google's ML products. It links to 3Blue1Brown's videos which are great!


I really appreciate you sharing this, as I always try to create a learning path and make things as clear as possible on top of it. The slides are good looking (the design is awesome). The problem is that it sometimes gets overwhelming when advanced things pop up in the talk, and it's hard to know whether we should listen to them or whether they are above our current level.


I honestly don't really see the value in a slide deck without the accompanying talk. It's the same as when someone proclaims "Slides from this talk are available online": yeah, that's not really any good without video or audio.


Disagree. Slides, even out of context slides, can be an excellent source of information for people new to a field: they still give a sense of the structure of the talk (what is related to what), and they give you keywords to start searching for. And for experienced readers, sometimes they just contain nice ideas or tips you had not been aware of.


No, they aren't. Slides are a skeleton: just bullet points that 99% of the time offer no usable information.


Idea: a neural-network-generated infinite slidedeck which appears to follow a coherent narrative that is ever so slightly out of grasp.


How can I download these slides?


Going to the /export/pdf link shows access denied. My quick 3-minute workaround was to use the FF dev tools: enable screenshots in settings, then click through and save each page.


96 x 2 clicks too

such modern


Just use the download button under the gear icon. /s

Unfortunately it doesn't work I guess.

Oh, and he says he is watching you... Maybe he really means this? Maybe that's why he disabled downloads?


I clicked the gear, I clicked the PDF/PPTX, nothing seems to be happening... I think I just became a "Person of Interest".


Yep same here.


This makes it look like all interesting ML projects are by Google.


Probability distributions are approximate world models. There are mathematical tools to manage them. Once you see the world this way, ML/DL becomes more intuitive.


>Artificial neuron (aka perceptron)

These aren't synonymous; a perceptron is a type of artificial neuron.

Also confusingly, 'multi layer perceptrons' might not contain perceptrons at all.
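
Roughly, in code (a minimal sketch): a perceptron is a weighted sum followed by a hard threshold, while the units in a typical "multi layer perceptron" swap the threshold for a smooth activation so the network can be trained by gradient descent.

    import numpy as np

    def perceptron(x, w, b):
        # Classic perceptron: weighted sum, then a hard step.
        return 1 if np.dot(w, x) + b > 0 else 0

    def sigmoid_neuron(x, w, b):
        # Typical MLP unit: same weighted sum, smooth differentiable activation.
        return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

    x, w = np.array([0.5, -1.0]), np.array([1.0, 2.0])
    print(perceptron(x, w, 0.0), sigmoid_neuron(x, w, 0.0))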


As someone who has recently taken an interest in machine learning and AI, this deck is much appreciated.


Thank you for this. This is an excellent slide deck


Why is the file protected? How can I access it?


What was he headbanging about in these last 2 years? Just a linkbait-y title?


I really wish that they hadn't decided on having moving images behind the text you are supposed to be reading.


Give us your undivided attention.

Does everything to distract us.


Agreed - but that's only a small portion in the beginning. The bulk of it is fine.


> 2 years of headbanging

Shame the backgrounds gave me a headache anyway


The slide titled “A note on dimensionality” reminded me of this xkcd: https://www.xkcd.com/547/

“That would be (very) bad.”


Gold!



