Is it facial recognition that's the problem here, or the policy itself? In theory, they could hand out pictures of everybody who wasn't allowed in to the security staff and use low-tech facial recognition to enforce the same policy - assuming it was scalable, would it be OK then? I remember reading that IBM got into trouble when it first computerized its personnel files because some exec noticed that the computer could scan through all the personnel files and fire people who were close to retirement to save on paying out their pensions. It would have been prohibitively expensive to pay a person to do that, but with the computer, it was a quick SQL query (or whatever query language IBM used back then). The problem wasn't that the personnel files were computerized, it was that they were being used in a very evil way.
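To make it concrete, the kind of query involved would have become something like this once the files were in a database (a hypothetical sketch; the record layout, dates, and "close to retirement" cutoff are invented, not IBM's actual schema or query):

```python
# Hypothetical sketch only -- the record layout, dates, and cutoff are
# invented for illustration, not IBM's actual personnel schema.
from datetime import date

personnel = [
    {"name": "A. Smith", "birth_date": date(1901, 3, 2)},
    {"name": "B. Jones", "birth_date": date(1931, 7, 15)},
]

RETIREMENT_AGE = 65
CUTOFF_YEARS = 2  # "close to retirement"

def years_until_retirement(birth_date, today=date(1965, 1, 1)):
    age = (today - birth_date).days / 365.25
    return RETIREMENT_AGE - age

near_retirement = [
    p["name"] for p in personnel
    if years_until_retirement(p["birth_date"]) <= CUTOFF_YEARS
]
print(near_retirement)  # ['A. Smith']
```

What took a clerk weeks of flipping through folders becomes a few seconds of machine time, which is exactly why the policy question suddenly mattered.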
You're right: it's the policy that's bad. However, facial recognition is what makes enforcing that policy possible, and society and the law haven't changed quickly enough to deal with the repercussions.
For instance, say 18-year-old "Joe" gets caught stealing soda from a fast food place. The manager kicks him out and bans him from the store. Fast forward a decade, when Joe walks back into the store with his wife and kid to buy a quick meal.
Without facial recognition: Joe's grown up and matured. The manager was either replaced years ago, or doesn't remember the incident, or vaguely remembers but doesn't care about it anymore. Joe and his family pay, eat, and leave.
With facial recognition: the system notices that someone on its "do not allow" list has entered the store and summons police to deal with the trespasser.
When human judgment is involved, we rarely deal with absolutes. A lifetime ban isn't really for life. It's until both parties grow up and the situation cools down. A ban on a competitor's employees isn't absolute. Maybe you won't serve the owner, but if his dishwasher comes to your place on a date, you're not gonna hassle the kid. We're really, really bad at designing automated systems that handle nuance. It's way easier to write code like `if photo_hash in banned_people: ...`.
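A sketch of what I mean (all names and fields here are invented): the naive check is one line, and every bit of nuance a human applies for free has to be bolted on explicitly, which is why it usually isn't.

```python
# Naive version: one line, and exactly as unforgiving as it looks.
def allowed(photo_hash, banned_people):
    return photo_hash not in banned_people

# Even minimal nuance (expiry, severity, a human in the loop) has to be
# added explicitly. Field names are invented for illustration.
from datetime import date

def allowed_with_nuance(photo_hash, ban_records, today=None):
    today = today or date.today()
    record = ban_records.get(photo_hash)
    if record is None:
        return True
    if record.get("expires") and record["expires"] < today:
        return True   # the ban has lapsed
    if record.get("severity") == "minor":
        return None   # don't auto-deny; escalate to a human instead
    return False
```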
Can you imagine getting into an argument with a bartender and being asked to leave - and then you can't go into any other bars either? Yikes!
Reminds me of the hidden low-level-banking-employee blacklist that some would end up on for not forcing enough customers into expensive products they didn't need.
Or the opiate painkiller registry that doctors use to centrally track folks who come to them for painkillers, and which by the way counts any painkillers you might have gotten from the vet for your dog.
Blacklisting is something we already recognise as wrong; banning who you want from your own establishment is fine, but passing around a list of people and banning them on someone else's say-so isn't. Perhaps the laws against it should be enforced, or strengthened.
Except sometimes an establishment is part of a multinational conglomerate. Eventually some level of due process is going to be needed in certain instances.
I think in many contexts this would be desirable. If someone was belligerent and got banned for starting a fight on a Delta flight, I wouldn't want to sit next to him on my United flight. Venues should have the freedom to use whatever judgment/algorithm they feel will keep their customers safe, including sharing lists of troublemakers. If the list is too inaccurate, it stops doing its job and becomes a net-no-help to the venue using it.
That's how you end up with redlining or worse. It's too easy to slip a few "undesirables" into the list you're sharing, and as long as you don't do it too often then your list remains a net-positive so people keep doing it. (See also the guy who put his ex-wife on the no-fly list).
For something like starting a fight, you can go by actual court records, which are public and have processes in place to correct them if they're wrong (and we have rules about e.g. when convictions should be expunged). Just sharing lists of names is too abusable.
This list amounts to a social credit system that some cities in China have implemented. I always thought that was something we’d like to avoid in the US. The fact that it’s privatized in the US and a government list in China makes no difference: is it any different to be blocked from taking a bullet train in China than to be blocked from flying in the U.S.?
I have several concrete problems with these lists.
1. Secrecy. I should have a right to know if I’m on the black list.
2. Due process. There is no process to being put on the black list. Partially because you don’t know it’s happening, there’s no way to contest it.
3. Permanency. The punishment should fit the crime. If you do something at 18, that shouldn't be a lifetime punishment. However, being on a secret list of names and faces distributed between companies is a lifetime punishment.
> If the list is too inaccurate, it stops doing its job and becomes a net-no-help to the venue using it.
This is an extremely optimistic take. In reality, what is the threshold of false positives where the list stops being useful? One percent? Five percent? Hell, even if the list is 10% wrong you’re still denying people who should be denied 90% of the time. And there will always be a stigma against people who are on such lists. “You must’ve done something to get on that list.”
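To put rough, entirely made-up numbers on that:

```python
# Illustrative arithmetic only; every number here is made up.
list_size = 10_000
error_rate = 0.10   # fraction of entries that don't belong on the list

wrongly_listed = int(list_size * error_rate)   # 1,000 people
rightly_listed = list_size - wrongly_listed    # 9,000 people

# From the venue's point of view the list still "works" 90% of the
# time, so there's little pressure to clean it up. From the point of
# view of the 1,000 wrongly listed people, they're banned 100% of the
# time, with the stigma to match.
print(wrongly_listed, rightly_listed)  # 1000 9000
```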
I didn't say anything about farting. I don't want to board an airplane with someone who has a history of assaulting people. If airlines could build an accurate list, I would have no problem with them sharing the list and effectively banning those people from flying. The "accurate" part I admit is the hard part to get right. As another replier suggested, maybe base it on public arrest records or something. But assuming it could be done accurately, of course it should be implemented. Do you want a loose cannon who has assaulted people in the past on a tin can with you at 50,000 feet?
This isn't about having your best day. I've had many bad days, yet somehow never assaulted anyone.
And I'd rather live in a society where the punishment for assault is meted out by a court of law, via a trial if necessary or appropriate, and not via some privatized blacklist with no means of due process.
> I don't want to board an airplane with someone who has a history of assaulting people.
> Do you want a loose cannon who has assaulted people in the past on a tin can with you at 50,000 feet?
If this someone paid whatever price a competent court of law imposed on them, then I have no problem boarding a plane with them. People make mistakes, even grievous ones, and even ones that you or I wouldn't make on our worst of days. The law encodes the penalties for those. Beyond that, what you're advocating for is extrajudicial punishment.
Maybe farting isn't on your list. Maybe it's on somebody else's list.
That's a key problem here. People don't agree on what's acceptable and what the proper criteria for forgiveness is.
If somebody did one of your more unambiguous transgressions, like assault (proven to a jury beyond a reasonable doubt, I hope?), but reformed themselves, how do you know, and who decided that? Maybe they did it because of a drug habit, mental illness, or some other extreme condition that they've now worked past. Maybe they really don't want to do it again, and would not.
These are the types of problems that can arise when you split the world between good and bad people. Not everyone who does something you disapprove of is irredeemable.
A big problem here is, what is the airline's incentive for the list to be accurate? If it has a whole lot of false positives, in absolute numbers it's still too small to put any real dent in their business - there are millions and millions of other customers. What you really need is transparency, oversight, an appeals process, and guarantees that human judgement is in the loop - all things that come "for free" with the court system, but would have to be recreated from scratch in any corporate-run extrajudicial penalty system.
Schools are using ID scans to ban people and prevent sex offenders from entry (yes, it's good, but some of those people have kids enrolled). This has been rolled out heavily already. (I believe this is called rapidscan.)
Please keep in mind that some people on the sex offender registry are only there because they were peeing in public, or because they were sexting while underage.
Scale is also an issue. So many things are either owned by a megacorp or run via a centralised platform. Getting banned by a store pales in comparison to being banned by Uber, Google, Visa, etc.
Just as most websites today use captchas run by one or two companies, we may see a scenario unfold where most brick and mortar businesses use a facial recognition blacklisting system run by one or two companies. Get labelled as trouble by the system and you'll be banned by every store in town.
That’s a great point. For that matter, imagine that being maintained by Verisign or the like. Get a ding on your social credit report, and voila, you can’t walk into a grocery store anymore.
What’s weird to me is that MSG had to get pictures of all of this law firm’s attorneys and enter them into their facial recognition database. It seems wasteful and really petty. They probably pulled them from the law firm’s website.
> A lifetime ban isn't really for life. It's until both parties grow up and the situation cools down.
This is especially worrying when no humans are involved. If your Google account gets banned by some automated bot for whatever reason, you might find any attempts to create new accounts treated as "ban evasion".
Never mind the fact that you have very few options for things like ad networks or publishing Android applications, even before you consider losing access to your e-mail or any servers you might have.
I’d amend that: it’s always the policy that should be modified in the longer run. Sometimes it’s wholly appropriate to pump the brakes on the tech that would allow mass application of the bad policy.
It rarely if ever actually works out for the simple reason that, if one jurisdiction pumps the brakes, there's a dozen others willing to floor the other pedal on the same thing.
Why is the policy bad? They are preventing employees from a law firm in active litigation with them from attending their venues while the litigation takes place. It's not some draconian thing.
I'm certain you didn't read the article. The link between the woman and the litigation is very tenuous.
She doesn't work on the case, and the venue has nothing to do with the case either, other than that a huge corporation owns both the venue and the restaurant being sued.
I did read it. MSG notified the law firm of their policy while the litigation is ongoing, twice.
> "MSG instituted a straightforward policy that precludes attorneys pursuing active litigation against the Company from attending events at our venues until that litigation has been resolved. While we understand this policy is disappointing to some, we cannot ignore the fact that litigation creates an inherently adverse environment. All impacted attorneys were notified of the policy, including Davis, Saperstein and Salomon, which was notified twice," a spokesperson for MSG Entertainment said in a statement.
(a) Conlon does not practice law in New York where Radio City Music Hall is located.
(b) Conlon is not an attorney pursuing active litigation against MSG Entertainment. She works for a NJ-based law firm that is representing another party in litigation against an unrelated restaurant which now happens to be owned by MSG Entertainment. She's not part of that ongoing litigation.
(c) > A recent judge's order in one of those cases made it clear that ticketholders like her "may not be denied entry to any shows."
(d) > "The liquor license that MSG got requires them to admit members of the public, unless there are people who would be disruptive who constitute a security threat," said Davis. "Taking a mother, separating a mother from her daughter and Girl Scouts she was watching over — and to do it under the pretext of protecting any disclosure of litigation information — is absolutely absurd.
Refusing her entry doesn't even make sense according to their stated policy, and it is absolutely _draconian_. She doesn't work on the case—she just happens to work for the same company. If this firm was representing a client suing Meta or Google or Apple, would it be okay for Meta/Google/Apple to ban all attorneys from using all of their services? This type of behavior just discourages firms from taking on clients suing large companies.
The law firm was notified ahead of time, twice. Lawyers expect others to abide by such notifications, do they not? They act like this was a crime against humanity, when they were told ahead of time that they were not allowed in their venues during the ongoing litigation.
Does the law firm have 100,000 employees? According to their website, they have about 29 attorneys in the firm. Bringing up companies the size of cities compared to that is completely ridiculous and irrelevant. And companies like Meta, Google, Apple, etc. will absolutely enact draconian policies when IP and other litigations are going on. The secrecy and policies those companies put in place likely go well beyond simply not letting a lawyer from a law firm that is suing one of your businesses into your building.
Why does it matter where she's licensed? It's irrelevant.
Lawyers play these little games all the time. I'm not necessarily for the policy or the use of facial recognition to enact it, but they were told ahead of time. They should know better. If they wanted to argue the points ahead of time and get approval, they could and should have.
How is that relevant? A stupid and harmful policy is stupid and harmful, regardless of who was notified and how. If I send an email to Google to notify my displeasure, it is not reasonable to expect all Google employees to be aware of it and avoid my business.
Not to mention that personally punishing individual employees for a beef your holding company has with some of their colleagues is stupid. There is no other way of putting it.
> Lawyers expect others to abide by such notifications, do they not?
There is so much wrong here. Lawyers are not omniscient. They also expect companies to abide by their own terms of use, and routinely ignore unfounded or groundless “notifications”.
> Bringing up companies the size of cities compared to that is completely ridiculous and irrelevant.
So, where’s the limit? What company size makes this reasonable?
> And companies like Meta, Google, Apple, etc. will absolutely enact draconian policies when IP and other litigations are going on.
So, it is draconian after all. Show an example of individuals being booted off Google’s or Apple’s platforms only because of their employer.
> Why does it matter where she's licensed? It's irrelevant.
But then, none of the points you’ve made are, either.
I started out reading the article wondering 'OMG, what is it that this mom could've possibly done that she's banned?' and then when it says "lawyer at adversarial law firm" I immediately switched to "oh yea, makes total sense".
None of the counterarguments here stand up to scrutiny.
"Isn't involved in litigation" and "not a Security threat" - she is totally a security threat: today she's a mother of a girl scout and is not working on the case, and tomorrow she's a loyal employee helping out with the case.
"Part of girl scouts trip" - Does she have a firewall in her brain between personal and professional?
'Conlon said she thought a recent judge’s order in one of those cases made it clear that ticketholders like her “may not be denied entry to any shows.” - You know what the best thing about America is? Our endless appeal system.
This entire article is a non-story (someone denied access to a business), and yet, it's somehow making the rounds. The paranoid cynic part of my brain is interpreting this entire situation as "A law firm that sent in an employee for some snooping, and then got caught, and now is making some noise in the papers." The lady doth protest too much, methinks.
What she's alleging seems to be: the MSG conglomerate is using their large footprint to punish law firm employees unrelated to their dispute using venues also unrelated to their dispute. Doing it out of spite sounds possibly legal, if petty. But the other possible intention would be to try to dissuade law firms from taking a case against any MSG property, to try to deny legal representation to the plaintiff. Not a lawyer, but surely there's a law against that?
"Unrelated to their dispute" is a fuzzy category. How would the venue actually know that she's unrelated to the dispute? They're not going to have access to the internal management chart of the law firm in order to be able to precisely delineate in every case exactly who gets admitted and who doesn't. And it's unreasonable to force them to create some sort of bouncer appeals board to let people present evidence to show that their position at the law firm is unrelated to the dispute.
Remember everyone arguing that Twitter pre-Musk is a private company, so they could ban anyone they wanted? This is the same thing, only in a physical location.
> How would the venue actually know that she's unrelated to the dispute? They're not going to have access to the internal management chart of the law firm in order to be able to precisely delineate in every case exactly who gets admitted and who doesn't.
The venue knows not, but that venue is a tiny cog in an empire owned by the parent group.
The parent group compiled the ban list by trawling the law firm's website for images .. and the parent group knows which offices and groups of personnel are involved in a specific case.
With large and potentially transnational groups doing this, it has a parallel with, for example, one country banning an entire country's citizens from entry or from doing business .. on the basis that a small group of citizens took action that was undesired.
Very large companies have very large numbers of employees and many different activities on the go.
Should, for example, several thousand people be banned from watching streaming television because 15 people in the company they are associated with are involved in a class action against a media group?
> The parent group compiled the ban list by trawling the law firm's website for images .. and the parent group knows which offices and groups of personnel are involved in a specific case.
Do they? I don't work in law but at every company I've worked we adjust who is working on what based on needs at the time. Why wouldn't a law office temporarily shift more people to a case if they needed some extra manpower?
Assuming that whether or not the lawyer is working on their case is the deciding factor, the venue (actually the conglomerate that owns it) is the one that chose to come up with this retaliatory scheme, so the onus is on them.
Calling it punishment that they can't be a customer of the company their firm is suing seems like a bit of a stretch. And it's only for the duration of the lawsuit.
And again, why are the lawyers so surprised when they knew ahead of time? If they had asked, it could have even been pre-approved, and thus a non-story. If anything, I'd almost consider this to have been an intentional act by the law firm because they knew ahead of time and took their Girl Scout troop anyway, knowing it could look bad for MSG.
> Calling it punishment that they can't be a customer of the company their firm is suing seems like a bit of a stretch. And it's only for the duration of the lawsuit.
What if the company was Google? What if it was a healthcare provider with a patented/proprietary treatment?
As a matter of fact, didn't we recently have articles on HN where people were commenting that they are reluctant to issue chargebacks against Google because they don't want to risk losing their Gmail and the rest of it?
> I'd almost consider this to have been an intentional act by the law firm
Good for them. The legal system is the only way corporations can be effectively held accountable. You can hate lawyers as much as you want but this is directed at us via proxy. Lawyers litigate for clients.
"Sorry we can't take your case. We use Google products extensively."
For whatever reason, I’m just not seeing it. Everyone keeps bringing up Google. Why? They are in a completely different industry and largely irrelevant from what I can see. And like I said, those companies will absolutely shut stuff down during IP litigation. When I was at a company that was suing and being sued by another company over IP infringement, the entire company was told not to use the other company’s software products, and if you absolutely had to, then you needed to apply for specific one-off permission. That happens all the time, and it’s practically the same thing.
The law firm is a personal injury firm, which in my experience and understanding can be (not always) very shady. Why is it required that MSG let lawyers suing them come into their venues while being sued? One could argue that the policy should be targeted towards certain venues and lawyers, but that is a lot of overhead that is solved by a simple, blanket policy.
I honestly don’t see the outrage here. Sure, there are a lot of what ifs that make this seem worse, but those hypotheticals are not what seemed to happen here.
And it’s the law firm showcasing punitive action. They’re now suing MSG for the denial for something that basically seems like a stretch of a loophole. I almost would guarantee the law firm did this on purpose, and that’s why I can’t stand lawyers. They don’t play by the rules everyone else has to, and they get to make the rules.
This is not a black and white issue, and there are valid points that can be made pro or con of either side. Even the likely possibility (I agree with you on that) that this was all planned does not change this. This fuzziness of the line that would obviously delineate right vs wrong is the issue.
That is why I added that "proxy" bit in there. One could argue for the position of law firms or the position of corporations, and I am urging you to now consider it from the pov of lonesome you, the possibly innocent bystander, caught between these two powerful social forces. You may still reach the same conclusion but it is a distinct analysis and you should do it if you haven't already.
It's obviously a terrible thing if firms start blocking people they might have some problem with. It's going to be used in terrible ways. I hope that, because lawyers are involved, they'll manage to destroy this as a general thing companies can do.
This is kind of an ironic case, because I've noticed that lawyers make themselves immune to non-compete clauses via state laws in most states. In California, famously, regular employees are generally immune to non-competes. In my state, lawyers are not impacted by non-competes by law, but regular devs are subject to them; even sandwich makers have been blocked from changing jobs. There's been a big battle from devs to get rid of them, but it hasn't yet passed the state legislature. My own state legislator said she didn't think there was a problem - of course, she's a lawyer. The lawyer and business class wants to keep them.
Because employees are not their employers, and have a life outside of work in which they still do stuff? Seriously. Yes, it is draconian.
Besides, if the law firm really wanted someone in there for hand-wavy reasons, why would they not just hire someone for it? Blacklisting is petty, counter-productive, trivial to abuse, and solves nothing.
So what? There are a lot of laws and policies based upon where you work that restrict you in your personal life. The lawyers can get over it. This is a non-issue, and I suspect the law firm did all this on purpose to drum up another lawsuit against MSG.
Retaliating against a law firm's employees for doing their jobs by representing their clients is absolutely draconian. Especially for a large company doing this with unrelated properties.
If it were protection .. to prevent a lawyer from witnessing something at a public venue that might be used as evidence against them ... then there is no protection possible.
A sensible law firm would not use one of their employees with a public image available, they'd use a contracted private investigator to gather evidence.
This smacks of punitive behaviour to not only punish the law firm (for doing their job) in a petty manner, but to also intimidate any other law firm thinking of taking them on.
Eg: Hypothetically if Disney were to ban all members of any company that challenged Disney on ride safety .. how would that appear?
Protection from what? The lawsuit has nothing to do with the MSG venue, but rather with a restaurant somewhere else that MSG owns; and this lawyer isn't working on that case anyway.
What is an easier policy to enact? A subset of people that they can’t exactly determine for a subset of their venues, or a blanket policy covering all employees of the small firm at all their venues?
Making the ban be for fewer venues would be way easier.
Or just, like, not doing this at all. This is James Dolan, the owner of Madison Square Garden, being petty as usual and banning people from the venue who annoy him. e.g. Spike Lee for saying bad things about him, Woody Allen for the wrong reasons (he refused film ads for Dolan), &c
I am under no obligation to contort myself to assume good faith on his part in the face of a pattern of behavior that indicates otherwise.
I'm not sure I see what's unfair here. They were notified. If they wanted to negotiate around it, they could have done so ahead of time. They're lawyers. They should know to read material they're sent.
I see it as things returning to how they were. In a small town, the person running the fast food joint often did work there for decades and everyone recognized everyone. Theft was hard to get away with because you don't just blend in to the masses like a modern city. Now tech is allowing us to hold people accountable again.
I think that’s a romanticized memory. While holding people accountable is good, so is eventual forgiveness. If you steal a candy bar from Old Man Smith at the town’s only gas station, and he’s the kind to hold grudges, you’re boned. Never mind teens trying to buy condoms when every cashier in town knows them.
There’s a happy medium in there somewhere. You don’t want to let repeat offenders off scot free, but there needs to be a cooling off period, too.
The problem with automated facial recognition technology is that it makes what used to be infeasible suddenly routine and cheap. In the past, would the security folks at Madison Square Garden have even thought about implementing this kind of policy? No, of course not; it would have been obvious from the very beginning that such a policy would have required far too many resources to feasibly implement. At best, they'd have been able to hand out pictures of a few folks, known to be bad actors, and told their security personnel, "Hey, watch out for these people and don't let them in." It's highly unlikely that a random obscure attorney working at the same firm, but not specifically tied to the litigation at hand, would have made it onto that list.
However, today, with facial recognition, it's possible for Madison Square Garden to have blacklists consisting of thousands, or even tens of thousands of people and check against them just as quickly and as easily as if they were a list of a dozen people. That's a qualitative change, and I think it's valid to treat it as a separate kind of thing than a bunch of security guards, each with a stack of photographs.
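To make that qualitative change concrete, here's a rough sketch of the kind of check such a system performs (the embedding model, threshold, and list size are assumptions for illustration, not details from the article): once the blacklist photos are reduced to vectors, matching one visitor against ten thousand entries is a single matrix multiply, barely more expensive than matching against a dozen.

```python
# Rough sketch of an embedding-based blacklist check; all parameters
# are illustrative assumptions, not details of any real system.
import numpy as np

rng = np.random.default_rng(0)
EMBEDDING_DIM = 128

# Stand-ins for embeddings produced by a face-recognition model run
# over the blacklist photos.
blacklist = rng.normal(size=(10_000, EMBEDDING_DIM))
blacklist /= np.linalg.norm(blacklist, axis=1, keepdims=True)

def is_banned(visitor_embedding, threshold=0.8):
    v = visitor_embedding / np.linalg.norm(visitor_embedding)
    similarities = blacklist @ v   # cosine similarity against every entry
    return bool(similarities.max() >= threshold)

# One camera frame, one matrix multiply -- the cost barely changes
# whether the list holds 12 faces or 10,000.
print(is_banned(rng.normal(size=EMBEDDING_DIM)))
```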
It’s already happening. You can’t leave most municipalities without an LPR (license plate reader) scan, and interstate corridors are covered by face-recognition-capable cameras. The Feds can track your progress from Maine to Miami in near real time.
Imagine muggers with this tech. Imagine gangs with cameras guarding approaches to their turf – watching for rival gangs, cops, and people who have been placed on the all-gang database of marks and narcs.
> Is it facial recognition that's the problem here
Yes, I think so. Some technologies are inherently no good, lacking any obvious positive use cases but enabling obvious abuse. It is a mistake to shift the blame to "policy" or malevolent actors and claim the technology is merely a neutral enabler. No. The technology itself contains a gamut of evils.
If you had to identify the eventgoer in advance for a non-transferable ticket, then she would have been banned for the same reason and no facial recognition would have been involved. Would that be okay, in some way this is not?
Are you asking me personally? Or to respond with a general philosophical principle?
In either case, no.
On the first count, no, because I don't believe in "non-transferable" tickets for arts and entertainments. The holder has every right to give or sell their purchased property to family or friends, and "non-transferable" instruments violate that first-sale right.
On the second point, no, because the singular specific "okay" case (even the tear-jerking one that saves poor fluffy dying from kitten cancer) has no bearing on the profound but nebulous harms inflicted on society by an essentially rotten technology (on a purely Utilitarian principle).
Both. I'd argue that in addition to the policy being bad, she probably didn't agree to her image feeding into the dataset backing the facial recognition, which raises some spicy privacy questions: how did this private company get the image? Under what terms of use?
I think so, anyway. IANAL, but I believe the relevant privacy law prohibits the collection of, in this case, images for facial recognition, etc.
So you can take my picture in public and keep it on your phone, but you cannot upload it to, say, Instagram without my permission; there have been all kinds of recent lawsuits about this. I think the intent of the law is to prevent my picture from winding up on Instagram without my permission, but it certainly should not be usable the way it was used in the article - scanned to prohibit me from entering an establishment. Apparently, no matter what state I go to....
I’d argue the technology takes what was, at one time, a relatively benign and reasonable policy and weaponizes it.
There’s a level of enforcement where a reasonable policy or law becomes unreasonable.
Cameras on every street corner enforcing every infraction, no matter how minor, is an extreme. There’s a threshold before that extreme where the policy combined with the enforcement becomes unreasonable.
I think the combination of policy and enforcement outlined in the article is well over the line.
The problem is not that it provides new opportunities, the problem is determining which of these opportunities are acceptable to pursue, and under which circumstance.
I don't like the "this is bad because it allows current laws/practices to be efficiently enforced" attitude that a lot of people seem to hold.
Oh no, in my opinion, by all means, make it 100% efficient, cheap and easy to enforce _EVERY_ policy under the sun.
We need to get on with cleaning up and amending policies until it's not a complete shit show.
Any policy should be written as if it will be effectively and quickly applied to 100% of the cases, and if that makes it look shitty, then it is and must be fixed.
The idea that it would be prohibitively expensive to manually look at each employee’s age on paperwork compared to the cost savings of denying pensions is extremely dubious.