It's meant to evoke the feeling of being in therapy. It started as an attempt to be an AI therapy company, and they pivoted to "github for AI" around 2019.
Dumb question, but... Are AI and ML the same thing? I ask because I am seeing a lot of my ML peers claim to be AI experts with years of experience, even though the term AI wasn't really used at all until OpenAI got big.
AI as a field has been around since the 50s and gone through multiple paradigms, but modern AI (since its resurgence around 2010) is effectively all machine learning.
What "AI" means has always been shifting; right now when most people hear it they assume it means generative AI and deep learning, but it's really a very broad category.
Early incarnations/uses of the term include what is now sometimes referred to as Good Old Fashioned AI (GOFAI), with such things as expert systems and ontological classification systems; these still technically fall under the umbrella of "AI". After GOFAI came other technologies, including the precursors to our current deep learning models, such as much simpler and smaller neural networks. Again, these are still "AI", even if that's not what the public thinks when they hear the term.
I'm not a programmer, but my understanding is that back in the '80s investors got in a frenzy when the idea of AI was big. They threw money at it and... it was decades away. Then they tried the "e" phase that became the dot-com boom... and it was overhyped. This pattern has occurred several times with programs.
Technology has been progressing, and the definition of AI is loose at best when marketing is nearby. Consider the kinds of ways people describe "unemployed" on LinkedIn: the reality is glossed over and hyped with whatever phrasing is difficult to verify yet at least adjacent to the truth. I think ML researchers classifying themselves as AI researchers is similar. HOWEVER, there may be a lot more overlap than simply "adjacent", so someone please correct me if "machine learning" or "AI" are regulated terms for public advertisement.
Ok, I hear a lot of 'AI is a superset of ML' definitions, and yes, that's historically true, but today AI is shorthand for LLMs and others (image generation, embedding, DL-based time series). I'd draw the line at systems using emergent phenomena that aren't realistically reducible to understandable rules.
Should we say 'this uses AI' for a Prolog/expert system/A*/symbolic reasoning/planning/optimization system today? I dunno; I had scruples even about calling classical and Bayesian statistical models 'machine learning,' reserving that for models prioritizing computational properties over probabilistic interpretation.
In my experience they mean basically the same thing in practice most of the time. The industry used AI for a long time (up through the 2000s iirc) but it developed a bit of a reputation for going nowhere. Then there was a revival in the 2010s, "The Social Network"-era funding environment, when "machine learning" was preferred, I think to shake off some of the dust associated with the old name. Now the pendulum seems to be swinging back to "AI", driven by several high-profile and well-funded thought leaders who have rewarmed the old AGI narrative thanks to some major leaps forward in models for certain applications.
Eh, a lot of it’s more down to marketing. ‘AI’ as a term has periodically come in and out of fashion; for instance, is OCR AI? Well, not 20 years ago, certainly, the term being unfashionable at the time, but now: https://learn.microsoft.com/en-us/azure/ai-services/computer...
(Computer vision, in particular, is basically always classified as AI today, but the term was mostly avoided in the industry until quite recently.)
I have read all of them and I have watched almost all of the training videos mentioned, and that is the list I would give anybody who I would want to discourage from joining the field. The textbooks, I mean COME ON. They are comprehensive and neatly academic but not pragmatic at all. There are playlists on YouTube that would cut through all the unnecessary content and get you building RAG systems in a couple of hours. Kudos to the LangChain team for all the content online.
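For anyone unfamiliar with what "RAG" involves: retrieval-augmented generation just means fetching the documents most relevant to a query and putting them into the prompt before asking the model. A toy sketch (my own function names, and word-overlap scoring standing in for the vector-embedding retrieval real systems use):

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Real pipelines use an embedding model and a vector store for retrieval,
# and send the assembled prompt to an LLM; both are simplified away here.

def retrieve(query, docs, k=2):
    """Rank docs by how many query words they share; return the top k."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Assemble the retrieved context and the question into one prompt."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

That's the whole idea; the rest of a production RAG system is better retrieval (embeddings, chunking, reranking) around the same two steps.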
The fastest way to learn is always to go after a difficult problem you find fascinating. Learning by the books is the second fastest, and is suitable for those without the experience to have a fascinating problem to motivate them. This applies to many but not all; for example, a university student might have a vague interest in AI but not know where to start. If you're already in the field though, it's best to just keep hacking forwards, even if you can't prove every theorem from the major textbooks.
Bengio and Goodfellow's book, while a great reference, is certainly not what I'd recommend for beginners. It's great for experienced folks IMHO, but otherwise it's quite opaque. I've used it to teach in the past and beginners really struggled with it.
Are you implying that he is adjusting his reading list to fake competence? Thomas Wolf has the second-highest number of commits on huggingface/transformers. He is clearly competent and deeply technical.
This comment seems unnecessarily personal. The difference between the reading list and Grokking Machine Learning appears to be the layer of application.
Grokking's description indicates it is for a very applied role: "teaches you how to apply ML to your projects using only standard Python code and high school-level math".
The linked reading list appears to be targeted one level up the stack, so to speak. Instead of learning how to string a few Python libraries together (which is useful, not knocking it), the goal would possibly be writing the Python library itself for an ML architecture.
Given the author’s position, it makes sense why they found the content of the latter useful in getting to that position.
Interesting to see that the book by the late David MacKay made it to the list; it is available to download in several formats from the book's website.
Information Theory, Inference, and Learning Algorithms:
https://www.inference.org.uk/mackay/itila/
The lectures based on the book by David MacKay himself:
https://youtu.be/BCiZc0n6COY
It's probably not a surprise that Claude Shannon, the father of information theory, was among the earliest to propose stochastic language models based on Markov chains.
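Shannon's idea in "A Mathematical Theory of Communication" (1948) was to approximate text as a Markov process: the next word is drawn from the distribution of words observed to follow the current n-gram in a corpus. A minimal sketch of that word-level model (function names are my own):

```python
import random
from collections import defaultdict

def train_markov(text, order=2):
    """Map each n-gram of words to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        model[state].append(words[i + order])
    return model

def generate(model, order, length=10, seed=0):
    """Sample a word sequence by repeatedly following observed transitions."""
    rng = random.Random(seed)
    out = list(rng.choice(list(model.keys())))
    for _ in range(length):
        followers = model.get(tuple(out[-order:]))
        if not followers:  # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Storing repeated followers in a plain list makes sampling automatically proportional to observed frequency, which is exactly Shannon's "second-order word approximation" trick.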