"When you log in to Facebook, we use the power of machine learning to provide you with unique, personalized experiences. Machine learning models are part of ranking and personalizing News Feed stories, filtering out offensive content, highlighting trending topics, ranking search results, and much more."
So, everything that I hate about FB is related to the AI.
Ranking and Personalizing: Horrible ranking results, and the "personalized" results that they push into my feed have nothing to do with me or my interests and are not something I would ever want in my social feed. It's almost as bad as their advertising results (which I'm guessing are based on the same AI).
Offensive Content: I could not give 2 f@@@@ about this, but I guess some people don't like strong language or pictures (the only point that I see as an acceptable use of the AI for now).
Trending Topics: FB, you are not Twitter, please stop trying to copy them. If I want trending topics, I'll go to Twitter. My Facebook is a way to contact people I have not talked to in months.
Search results etc.: What search??? Facebook as a search engine now? Or are you talking about the BS "Do you know John? You should know John!" - FB: If I have not added John, it might mean I don't want to talk to John because he is an a@@. Until your AI learns to detect aholes, please switch off this crap.
I know it's a rant, but I'm so tired of services trying to fill my feeds with crap, and also the crap BS about AI saving / ending the world. It might create nice cool articles for the news, and buzzwords in the bubble out west. We can only hope that Winter will not come this time, and we can advance far enough that AI becomes an actually useful product for more than spam filters and detecting whether John went to the same school as me.
* John is an invented person based on people I know.
* Swearing is blocked as I'm not sure what people prefer on here.
* Yes, I know AI is advancing a lot, and I'm a big supporter. I just don't like how it's used, and how the PR makes it look like it's the 7th wonder of the world.
You are probably a person who has been online for a long time and seen a lot. You have expectations that are not similar to those of the majority of FB users. They are mostly young people, or poor, or from poor countries, who only have a cheap Android phone, no laptop, don't know what Twitter is, and are discovering Facebook for the first time. They find uses for it without having any expectations (like "If I want trending topics, I'll go to Twitter").
You probably use FB for one hour per month because it's the most convenient way to catch up with your friends and family; they spend 4 hours per day because for them the internet is Facebook. FB uses statistics to become better for them, not for you.
I must be in his same bucket. All I see are either Hillary Clinton or Donald Trump posts. It all seems very polarized to me. Back when I first got on the internet, it was through dial-up modems. I would browse around on Gopher and everyone chatted on IRC. It felt more like a community. I think there are still pockets of that out there; it's just buried in all the noise.
Exactly! I am sick of seeing posts related only to Hillary/Trump, even back when they were not the presumptive leading candidates. I've also missed some important updates from friends because FB thinks they're not relevant to me.
Have you tried hiding the Hillary/Trump posts and interacting with the friends whose updates you want to see?
Or you could just install Social Fixer, and control your feed that way. However, in my experience, it's quite hard to produce a better feed than Facebook's AI...
All the major publications I follow do this. FB doesn't have a way to hide posts by keyword. The extension sounds good, but as you know, a lot of users, including me, use FB mostly on mobile.
> You are probably a person who has been online for a long time and seen a lot. You have expectations that are not similar to those of the majority of FB users. They are mostly young people, or poor, or from poor countries, who only have a cheap Android phone, no laptop, don't know what Twitter is, and are discovering Facebook for the first time.
This argument seems remarkably similar to the sort of argument once used to justify the dictatorial rule of colonial strongmen. "You don't understand these people, they don't value the thing you value, for them a sense of security is much more important than choice." etc.
Which is to say that it's a pretty patronizing assumption.
I use FB for hours a day; Facebook is probably my main Internet outlet. I think it's problematic that FB substitutes their version of how they expect you to filter things for giving their users control over how their news feed is filtered. Lots of "unsophisticated" users complain about this, and there are even cargo-cult methods used by the unsophisticated to get control over their feed (such as the "disclaimer post" purporting to give the average Facebook user some extra legal rights over their posts).
Perhaps you should add more data to your profile. These AI algorithms are only as good as you train them. Like things you like, unlike things you don't, and add demographics and interests. It will get better with use.
While FB has released some excellent open source software in the past, I don't really find blog posts about closed source software to be that interesting.
The optimist in me says that the reason this hasn't been open sourced is because a lot of the distributed code is tied specifically to FB's infrastructure - but I guess we'll wait and see.
I'm in the middle on this one. I totally agree it sucks that this isn't something we can use and that it is being used to manipulate our emotions. On the other hand, it is a good road map that others could possibly recreate. Having created an AI system that any engineer in the company can use (and build off of), not just the data science team, is pretty cool.
Some of my most popular blog posts have been explanations as to how closed source video game engines work. Some of the most popular conference talks are on closed source solutions to encountered problems.
Dismissing a blog post simply because it describes closed source software is silly.
Yeah, although we might not get to see exactly what's under the hood, key information and solutions shared by an insightful post can be used and adapted by the community anyway.
There's a key statement here that most will either miss or never reach: the second-to-last paragraph.
> Machine Learning Automation
Tuning models is a pain, and if you don't understand the model and all its parameters (like most software engineers, whose job is to write software, not build models!) it takes time and can be immensely frustrating. Randal Olson[1] just announced TPOT[2], a Python tool that "automatically creates and optimizes machine learning pipelines using genetic programming". This is going to be a huge lever for engineers wanting to experiment with and implement ML algorithms.
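To make the genetic-programming idea concrete, here is a minimal sketch in plain Python. This is not TPOT's actual API; the fitness function is an invented stand-in for "train a model and report a validation score", and the hyperparameter names (depth, lr) are illustrative only:

```python
import random

# Invented stand-in for "train a model, return validation score":
# a fitness function that peaks at depth=8, lr=0.1.
def fitness(params):
    return -abs(params["depth"] - 8) - 10 * abs(params["lr"] - 0.1)

LR_CHOICES = (0.001, 0.01, 0.1, 0.5)

def random_params():
    return {"depth": random.randint(1, 16), "lr": random.choice(LR_CHOICES)}

def mutate(params):
    child = dict(params)
    if random.random() < 0.5:
        # Nudge depth by one step, clamped to the valid range.
        child["depth"] = max(1, min(16, child["depth"] + random.choice([-1, 1])))
    else:
        child["lr"] = random.choice(LR_CHOICES)
    return child

def evolve(generations=30, population_size=20, seed=0):
    random.seed(seed)
    population = [random_params() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)   # selection
        survivors = population[: population_size // 2]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(population_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

best = evolve()
```

The expensive part in a real setting is that every call to `fitness` means training and validating an actual model, which is exactly the cost concern raised below.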
Yes, but does it work? I cannot tell from reading the introductory materials whether it will actually search efficiently over a large space (where it is computationally expensive to evaluate the fitness of any individual point).
I know that folks are having some success with Gaussian process optimisation for hyperparameter tuning (e.g. https://github.com/Yelp/MOE), but I would be sceptical about the application of genetic programming to this area, particularly over as broadly defined a search space as TPOT seems to set up. (Genetic programming either doesn't make assumptions about the objective function being optimised that could be used to search more efficiently, as Gaussian process optimisation does, or it uses implicit assumptions that may be unsuitable, depending on how you want to think about it.) Has anyone seen benchmarks against other search methods? The workflow does look great, though; integrating it into scikit-learn looks really clever.
I agree, and I think it's clear TPOT and similar tools are the first generation. Genetic algorithms might be slow/costly, but the concept is there with an integrated, implementable solution. If it's as useful as I think it could be, there will be a flood of modules, libraries, and packages developed with efficiency, UI, and domain-specific improvements.
I'm a bit confused by this idea; maybe someone can clarify. AFAIK, genetic programming/algorithms themselves require some degree of tuning in order to be useful in a reasonable amount of time and to avoid local minima each generation, via mutation rates, population size, randomized simulated catastrophe, etc. That being said, how does something like TPOT actually ease the pain of tuning the parameters of a different model if it requires its own tuning?
"This package provides users with methods for the automated building, training, and testing of complex neural networks using Google's Tensorflow module. "
I'm not particularly surprised that it's written in Python.
- Lua/Torch appears not to be designed for distributed systems as much, and looks less mature on the scientific computing side.
- PHP is clearly the wrong tool for this job - it's only really the right tool for websites (debatable).
- Could do Node.js, but it has little/no data-science/scientific-computing side.
C++ would be the only reasonable competitor to Python here, I think. Python is a pretty good combination of great for science, good for servers, and easy to pick up and use for all the engineers who will need to interact with it.
In the article they mentioned they implemented a custom type system for building the UI and for the input/output. Interesting, but I don't understand what that means. Are we talking about data types? A custom compiler? Very elegant ideas, I suppose, but I am just not familiar with the concepts...
> The body of the workflow looks like a normal Python function with calls to several operators, which do the real machine learning work. Despite its normal appearances, FBLearner Flow employs a system of futures to provide parallelization within the workflow, allowing steps that do not share a data dependency to run simultaneously.
"Futures": did they implement this with concurrent.futures (which also has a backport for Python versions prior to 3.2)?
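Whether FB uses it internally isn't stated in the post, but the behavior they describe can be sketched with the standard-library concurrent.futures module. The "operators" below are invented toy functions standing in for steps with no shared data dependency:

```python
from concurrent.futures import ThreadPoolExecutor

# Two independent steps (no data dependency between them):
def load_features():
    return [1.0, 2.0, 3.0]

def load_labels():
    return [0, 1, 1]

# A step that depends on both of the above:
def train(features, labels):
    return {"n_samples": len(features),
            "positive_rate": sum(labels) / len(labels)}

with ThreadPoolExecutor() as pool:
    # Submitting returns futures immediately; the two loads can run
    # concurrently because neither needs the other's output.
    f_features = pool.submit(load_features)
    f_labels = pool.submit(load_labels)
    # train() blocks on both results, since it depends on them.
    model = train(f_features.result(), f_labels.result())
```

The code reads like straight-line calls, but the scheduler is free to overlap the independent steps, which matches the "despite its normal appearances" framing in the quote.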
> Operators: Operators are the building blocks of workflows. Conceptually, you can think of an operator like a function within a program. In FBLearner Flow, operators are the smallest unit of execution and run on a single machine.
> Channels: Channels represent inputs and outputs, which flow between operators within a workflow. All channels are typed using a custom type system that we have defined.
When you think about it, what do NNs fundamentally do (once trained)? Map inputs to outputs, right? So they behave like functions. In type theory, functions are intensional and we explicitly construct them. With NNs we have our ML algorithm and construct them based on training data and feedback. With regular functions we have explicitly constructed A -> B; let's represent NNs as A ~> B and channels as A||B. Say we have two operators, E ~> F and G ~> H. Then to join them as E ~> F||G ~> H, we'd have to construct a channel F||G.
At least this is how I imagine they mean things! Corrections from FB most welcome!
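One way to picture this analogy in code: composing two "operators" is only valid when the output type of the first matches the input type of the second, and that shared type plays the role of the channel. This is just an illustration of the comment's reading, not anything from the article; the functions are toy stand-ins for trained models:

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def join(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    """Compose f and g; the shared type B is the 'channel' between them."""
    def composed(x: A) -> C:
        return g(f(x))
    return composed

# Two toy "operators" standing in for trained models:
def embed(text: str) -> list:
    # pretend: text -> feature vector (here, just word lengths)
    return [float(len(w)) for w in text.split()]

def classify(features: list) -> bool:
    # pretend: feature vector -> label
    return sum(features) > 5

pipeline = join(embed, classify)
```

A static type checker would reject `join(classify, embed)`, which is exactly the kind of wiring mistake a typed channel system rules out.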
My understanding is that a @workflow-decorated function is required to return a DAG of operators, i.e., no actual "calculations" are performed within the function other than whatever "setup" calculations are needed to populate the operator parameters.
I could be wrong, but it sounds like the Facebook team has built an internal Python DSL for building DAGs of operators in a procedural manner, helping maintain type safety between the channels and the inputs and outputs of each operator.
Really cool idea.
Because all the channels and operators have typed interfaces, they can materialize a UI for configuring any collection of workflows. React would be ideal for this, mapping types to React components. It would be awesome to know if they built a GraphQL layer on top of the workflow config parameters to get really elegant frontend-backend binding.
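A toy sketch of what such a DAG-recording DSL might look like, where running the workflow body only wires up typed channel placeholders instead of doing real work. All names here are invented for illustration; FBLearner Flow's real internals are not public:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Channel:
    op_name: str
    dtype: type

GRAPH = []  # (operator name, input channels) recorded at build time

def operator(output_type):
    """Wrap a function so calling it records a DAG node and returns a
    typed Channel placeholder instead of running any real computation."""
    def wrap(fn):
        def stub(*inputs):
            for ch in inputs:
                assert isinstance(ch, Channel), "operators consume typed channels"
            GRAPH.append((fn.__name__, inputs))
            return Channel(fn.__name__, output_type)
        return stub
    return wrap

@operator(output_type=list)
def load_data(): ...

@operator(output_type=dict)
def train_model(data): ...

def workflow(fn):
    """Run the body once, purely to record the DAG it wires up."""
    GRAPH.clear()
    fn()
    return list(GRAPH)

def training_flow():
    data = load_data()   # returns Channel("load_data", list)
    train_model(data)    # consumes that channel

dag = workflow(training_flow)
```

The resulting `dag` is pure metadata, which is what would let a scheduler run independent nodes in parallel and a UI layer render configuration forms from the channel types.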