However, there is a difference. If I don't get a job offer because I don't fit one human's biases about what they want, I can go somewhere else, to a human with different biases. The problem with AI is: how many Googles are there? We will likely end up with a handful of companies that rate you. Suddenly it doesn't matter where you apply for a job. If you did one thing that ruins/lowers these ratings, you may not get a job anywhere. Only a handful of entities get to decide what "good" is. If everyone is using these systems and you fail to meet their definition of a good person/hire, your life may suddenly become difficult.
Whereas before, people would have decided that themselves, even if it's a shallow, cursory judgement. At least people are making their own decisions instead of blindly following a number/evaluation with criteria they do not know.
Well, that's just part of the problem with the connected world. If you do something wrong it doesn't take machine learning or AI to find out, you just have to google it. Someone hiring you for a job will most likely do this anyways... I have had recruiters even call me out on such things. It has nothing to do with an algorithm.
It could be problematic if there was only one source of subjective information, especially if any perceived past transgressions were irredeemable. But if they are factual, do you have a problem with that? E.g., you go to hire somebody and find out that he's not recommended by hiring_heuristic_x based predominantly on factors X, Y, Z (X being he stole from his past 10 employers, Y being he never held a job for more than 6 months, Z being he commonly posts serious threats about people he doesn't like online).
Also, do you have any idea what actually occurs when you make a decision? I don't. I would love to know someone who does. I still follow it.
Did I say I have a problem with people making a decision based on information? The problem is that these AI systems will be making the decisions, and if everyone is using only a handful of vendors, and their software just says "bad candidate" or gives you a low rating, people are just going to default to that. What it boils down to is a small number of systems making decisions.
It turns into a small number making decisions for many. Whereas without the system it is decentralized and everyone is making independent decisions. There's also more room for nuance.
Do you think companies are going to develop their own AI and gather datasets on people just to evaluate potential hires? Train it for the requirements they feel are best for their company?
No, they are going to outsource it. So what is likely going to happen is only a small number of systems making these decisions.