Hacker News
What the Boston School Bus Schedule Can Teach Us about AI (wired.com)
99 points by ForHackernews on Nov 14, 2018 | hide | past | favorite | 76 comments



Am I the only one who reads this and gets the takeaway of: "Journalist realizes he was on the wrong side of defending a special interest group against the greater good via lack of due diligence, tries to backpedal and paint himself in a positive light via 'learnings'"

I think I'd be less cynical if they hadn't buried the lede deep in the middle of the article, and had mentioned "due diligence" even once - let alone as the crux or conclusion - since that's what I took as the real root cause here.

To me this isn't a story about AI. This is "just another" story of how readily public decision making can be swayed by biased special interests and an uninformed media/electorate.

The author should be praised for revisiting the topic, being upfront about their change in stance and trying to close the loop on it; but I think I am seeing the problem space from a very different angle than they are.


>Journalist realizes he was on the wrong side of defending a special interest group

Author of the article is Joi Ito, director of the MIT Media Lab. I wish he had spoken to the original creators of the algorithm (down the street) before co-authoring the article.

But I give him credit for this kind of mea culpa with additional context.


Yes, and I only read the first two pages because it made me a little ill how shabbily they came to their initial conclusion - anyone with an inkling of knowledge of the history of the USA and school busing would have approached this more carefully.


You nailed it, though I'm happy he wrote the piece at all despite it being a secret "mea culpa" piece.


Agree. This brings increased attention and news to an issue that would have been forgotten. It speaks to Joi's integrity that he wanted to correct his mistake publicly.


What's secret about this?


Yeah, it really seems like the author originally just flipped out at the word "algorithm" and is now backpedaling after realizing he was pretty uninformed.


I'm having trouble untangling the story from reading it. Who was the special interest group here, what did they object to, and what was the misunderstanding about the algorithm that the journalist harbored?


Wealthy parents (who likely drive their kids to school) objected to changes in their children's school start times made as a result of trying to optimize bus schedules.


Yeah but why? Is there more traffic at those times? Does it conflict with other activities that the parents plan to do?


If school starts later, then they may be dropping their kid off at school at 9am or 9:30am rather than at 8am. That makes it hard to get into the office at a reasonable time.

It's also not quite the same problem for those of us who ride the bus; parents aren't trapped schlepping kids on the bus. I'm surprised Boston schools don't use the local mass transit system for middle & high schoolers - we've been pushing for that change in Seattle for years (all high school students finally get ORCA cards). Why needlessly duplicate service?


They do have MBTA passes for middle/high school students to take mass transit (https://www.mbta.com/fares/reduced/student-charliecards)


This looks to be a program for a subset of students. I'm specifically referring to giving every student a transit pass. Familiarize them early with mass transit, de-normalize driving kids to school.

Even in the well-to-do neighborhoods of Seattle, most of the students from upper-income families are riding the bus to school after elementary. Driving your kid to school is very odd, even for the most wealthy. That being said, the private Catholic schools in Seattle are a whole different ballgame; they're singlehandedly bringing back long-gone diseases, as the archdiocese up here is very permissive of anti-vaxxers.


The 'subset' of students is middle-school and older who are eligible for busing. If you're in elementary school or in walking distance, the school won't buy you a pass. It's opt-in and busing is the default, but from 6th grade on most kids take the T.

I don't think driving your kids to school in Boston is unreasonable. Some school buses were chronically late, delivering kids to school up to an hour after the start of the first class.

It may surprise you to learn that Boston is not Seattle. For example, the public schools here are the ones with antivaxxers and diseases you learned about playing Oregon Trail.


Are you suggesting wealthy white kids should take public transportation?


why do you assume the opposition came from uninformed people?

Do you think they were not able to identify what is in their best interest?


Parent didn't say the opposition was uninformed. Rather, the media and the broader electorate were.


ok


For me this story drives home a point: marketing, or storytelling, matters. They had a model that delivered on a multitude of goals (more fairness, better health, school times fitted to different age groups, lower costs), but it got lost in a narrative made by others.

Paying appropriate attention to the story you are about to spread matters, especially for bureaucrats and scientists. These groups are less inclined to think marketing matters than, say, business people.

I see this a lot in local bureaucrats. Even the most well-meaning are usually not thinking in terms of customers, focus groups, and targeted information packs. They should. Think about what your goals were and communicate them. If you suspect opposition, create targeted narratives. Heck, call, mail, or email the groups most negatively impacted. It's what good businesses do well, but most organizations fail at.

A side note: meeting all those goals, including cost savings, while getting some kids home at 1pm and others at 4pm points to another problem: too low a budget. In my part of the world, schools just end at 12:30 or 3pm and parents have to figure out the rest. Of course, that rest is subsidized childcare.


> Optimizing the algorithm for greater “equity" also meant many of the planned changes were "biased" against families with privilege.

I'm not sure what this means, and I wish the article had gone into more detail. If it means "the preferences of families in higher-income neighborhoods were weighed the same as those of families in lower-income neighborhoods", that seems great, and uncontroversial. Those families probably had an outsized voice in the old scheduling process and so will do relatively worse in the new one, but oh well, that's life.

If instead it means "the preferences of families in high-income neighborhoods were intentionally given less weight than those of low-income families [to discount their privilege]," that's rather more ethically questionable.


It's worth noting that any group that has an outsized voice in the old process generally has an outsized voice in all local political processes. They will not treat any model that makes them "relatively worse in the new one" with an "oh well, that's life" attitude, they'll consider it as harmful to their interests (which is objectively true) and use their influence in the political processes to keep any such model from being implemented.

Similar aspects appear in many automation scenarios in many industries. In general, if you want a system to be adopted, it needs to provide at least some benefit also to those who are in power; if the current status quo is better for them, then the planned changes can be (and usually are) sabotaged. There are many cases when you can get a win-win where the changes benefit everyone; but if they put the privileged groups at a disadvantage (e.g. hospital systems that significantly reduce work for nurses but add some inconvenience to administrators or top doctors) then it's an understatement to say that you'll have an uphill battle. It's unrealistic to consider the possibility of some proposed changes purely on technical or ethical merit - you can ignore politics but politics won't ignore you.

A key step in implementing any changes is to identify all stakeholders (especially if they may have conflicting interests) and identify which of these stakeholders have critical influence and (possibly) a de facto veto vote. If you can't sell the proposal to these stakeholders, then it doesn't matter how good your proposal is and what the others think (unless they can influence the opposition), it's not worth the effort to proceed further because you won't succeed without having them on board.

Perhaps it's worth looking at it as a Chesterton's fence situation. If you come from outside, unaware of the local situation, and find that the system is biased to provide a nontrivial advantage to group X, then before action you should ask, how come? Why is the system set up this way? It may be that this is accidental, but more likely than not it's an indicator that actually (no matter what the official, documented, publicly disseminated policies might say) group X has the ability to decide, or at least influence, what the policies are going to be. And that's valuable information, especially if you want to change that. Let's not put the cart ahead of the horse - you won't reduce the influence of group X by changing that policy, rather you'll need to reduce the influence of group X before you'll be able to change that policy.


It means the former. The algorithm was tuned for equity, where equity previously did not exist.

To those with privilege, this will feel like bias against them (which is how I read the scare quotes above - feels like but is not actual bias.) They are losing an advantage they once held.


When You're Accustomed to Privilege, Equality Feels Like Oppression: https://www.huffingtonpost.com/chris-boeskool/when-youre-acc...


"Privileged" is often used as a way of saying "white" without sounding kinda racist.


No, "privileged" means having advantages that others don't.

Being white is highly correlated with being privileged due to historical circumstances, but it is only a racist reading of the word that thinks that being privileged is being white.


> No, "privileged" means having advantages that others don't.

When pro-white racists exist and are more common than anti-white racists in positions of influence, being white is, on its own, an advantage other people don't have.

It may also correlate with wealth, parental education, and other sources of privilege, but it is a source of privilege itself.

> but it is only a racist reading of the word that thinks that being privileged is being white.

Being white is, ceteris paribus, being privileged, not the other way around.

And, sure, that's a product of the existence of racism, but it's not racist thinking to recognize the effects of racism.


The author switches from using the term 'white' to 'privileged' halfway through the article to describe the parents who campaigned against the scheduling changes.

>One of the first things they showed us was a photo of the parents who had protested against the schedules devised by the algorithm. Nearly all of them were white. The majority of families in the Boston school system are not white. White families represent only about 15 percent the public school population in the city.

followed by:

>Optimizing the algorithm for greater “equity" also meant many of the planned changes were "biased" against families with privilege.

Since the author didn't clarify what they meant, or differentiate between the two, I'm assuming they used the terms interchangeably.

One funny thing I noticed: you could argue that since white families constitute only 15% of the public school population, these changes disproportionately negatively affect a minority group.


"Minority" is not the opposite of "Privileged".

When we speak of wanting to assist minorities this is when they are disadvantaged minorities. i.e. they are the opposite of privileged.

Please don't just play word games.


Ahh, but word games are par for the course in these discussions. A recent example is the humanities' redefinition of "racism" to mean "power and privilege" instead of "discrimination or bigotry based on race".

Don't disproportionate negative consequences constitute a disadvantage? Especially when faced with the existing issues of being a minority in a school system? Who really is disadvantaged here?

You can compound that with the new racism definition and I'm starting to wonder where all this supposed privilege is.


Privilege doesn't need to be all-encompassing. I'd argue that just being white carries some inherent privilege no matter how poor you are - such as in interactions with police. That doesn't mean you automatically do better in life, and you can be less privileged in other areas. White privilege =/= wealth.


It's also no antidote to how you are viewed in other circumstances - see: neckbeard, fly-over country, redneck, hillbilly, white trash.


Interesting article, but some details are vague. The MIT students pointed out that most of the protesting parents were white, while a majority of the school district were minorities. The obvious subtext is that white citizens are having an outsized influence on the city's decision making to the detriment of minorities. But there's another option that isn't considered in the article (though it may have been considered in real life): minority parents didn't like the new schedules either but didn't feel motivated enough, whether because of political disenfranchisement or apathy, to try to do anything about it. If that's the case, then the whole narrative changes completely.


The article didn’t present it as “white citizens have an outsized influence” but as “the squeaky wheel gets the grease.” A smaller number of VERY upset people makes a bigger impact than a large number of moderately happier people. It seems like the author, the press, the school district etc. overreacted in the face of pressure.


What if the number of people upset with the schedules was actually much larger (and more diverse), but only privileged families felt comfortable or justified protesting the schedule?


It seems unlikely that EVERYONE hated the schedule changes, given that the changes were designed to benefit the majority of them.

We can speculate all day about how people may have felt, but we just don't know. The district should have done polling to see what the overall reaction was before they pulled the plug. And if the counter-arguments amount to "I don't like change" then I think the district has a prerogative to optimize for health, safety, and costs.


That's not my takeaway. The article specifically mentions that the algorithm was biased for "equity", to the detriment of "privileged" families. That would allow us to reasonably conclude that probably white families were impacted more negatively by the changes. Unless I'm misreading?


Based on my knowledge of operations research, there is likely no AI involved in solving any of these problems. This is just a pure math problem - one that someone who knows Excel really well could solve. While you may not understand the math involved, that doesn't mean this is a black box.

This type of work has been put into practice for longer than the author has been alive. I don't think we need to reflect upon the changes it will bring to society since it's already old hat. I will admit that "What the Boston School Bus Schedule Can Teach us About Junior year Undergraduate Mathematics" doesn't have the same ring to it.
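For a sense of what that undergraduate-level formulation looks like, here is a minimal sketch of the tiering decision at the heart of the Boston plan: assign schools to start-time tiers so buses can be reused across tiers, minimizing fleet size subject to policy constraints. The schools, tiers, and constraints below are invented for illustration, and the brute-force search stands in for the integer program a real solver would use.

```python
from itertools import product

# Each school gets one of three start-time tiers; a bus can serve one
# school per tier, so fleet size = largest number of schools in any tier.
# (Hypothetical data; real instances also have routes and travel times.)
schools = ["HS-A", "HS-B", "MS-C", "ES-D", "ES-E", "ES-F"]
tiers = [0, 1, 2]  # e.g. 7:30, 8:30, 9:30

def feasible(assignment):
    # Policy constraints: high schools may not start in the earliest
    # tier (teen sleep health); elementary schools may not start in
    # the latest tier (home before dark).
    for school, tier in zip(schools, assignment):
        if school.startswith("HS") and tier == 0:
            return False
        if school.startswith("ES") and tier == 2:
            return False
    return True

def buses_needed(assignment):
    return max(assignment.count(t) for t in tiers)

best = min(
    (a for a in product(tiers, repeat=len(schools)) if feasible(a)),
    key=buses_needed,
)
print(dict(zip(schools, best)), buses_needed(best))
```

Real instances replace the brute-force search with integer programming and add routes, capacities, and travel times, but the shape of the decision is the same.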


Scheduling with constraints is a significantly more difficult problem than you suggest - well beyond junior year, and the fully general variants contain open research problems.

You are right about the popular (and, increasingly, technical also) mis-use of the term "AI" but that ship sailed a long time ago.


I am speaking from personal experience. Bus scheduling with constraints was literally the textbook problem I was solving in my Operations Research class during my junior year in undergrad. I would not classify myself as a genius, did not go to an Ivy League school, and I could do it then. It was on the harder end of the spectrum for my classes, but approachable to any undergrad studying mathematics. If you can pass a linear algebra class, you can learn how to solve these types of problems.


I am also speaking from experience. It's not my area of mathematics but I have a passing familiarity.

There are simplified problems that can be presented to undergrads, sure - just like in most areas. Intro courses often use a toy version of this to show integer programming and the like. But there is a reason these are "toy" versions - some of the general variants are NP-hard and have connections to interesting and open problems. Many real-world variants are not trivial to undertake.

On a general note - if you hear someone who is even slightly plausibly qualified say "Problem X is hard" and you think "Huh, I solved problem X in undergrad" - the overwhelmingly most likely reason is that you aren't, in fact, talking about the same "problem X". This can of course be due to the method of communication (e.g. science journalism)


I agree: solving for the optimal solution is generally an NP-hard problem when dealing with real-world constraints. The real world is not a word problem in a math textbook.

In practice you can often get away with improving an existing system instead of finding the "best" solution to the problem. This is what pragmatic people do, and it is often a lot easier than we think it is.

If you frame the problem as "finding the best possible solution to the bus routing problem factoring in all the needs of the populace", I would agree that's a very hard problem. If you frame the problem as "can we improve the existing Boston bus routing system", it will likely be a lot easier. Always remember that perfect is the enemy of done.
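As a concrete instance of the "improve the existing system" framing, here is a minimal 2-opt local-search sketch: start from the current stop order and keep reversing segments while doing so shortens the route. The stop coordinates are invented; this is the pragmatic improvement loop, not a claim about what Boston actually ran.

```python
import math

# Take the route in whatever order the stops are visited today and
# apply 2-opt moves (reverse a segment) until no single reversal
# shortens it. Coordinates are made up for illustration.
stops = [(0, 0), (5, 0), (5, 5), (0, 5), (2, 2), (3, 1)]
route = list(range(len(stops)))  # the existing, unoptimized order

def length(route):
    # Total length of the cyclic tour through the stops in this order.
    return sum(
        math.dist(stops[route[i]], stops[route[(i + 1) % len(route)]])
        for i in range(len(route))
    )

improved = True
while improved:
    improved = False
    for i in range(1, len(route) - 1):
        for j in range(i + 1, len(route)):
            candidate = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
            if length(candidate) < length(route) - 1e-9:
                route, improved = candidate, True

print(route, round(length(route), 2))
```

The result is a local optimum: possibly not the best tour, but never worse than what you started with, which is often all a district needs.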


You, like the other poster, seem to be assuming that nobody has tried "can we improve the existing ..." using the obvious tools. I find this attitude both puzzling and patronizing of the people involved. Without specific knowledge, it seems far more likely that the obvious things have been tried by reasonably intelligent people.

As you say, it's what pragmatic people do (and it's usually still more complicated than the intro textbook versions)

Now I have no specific knowledge of this particular problem, but the article does suggest it is something that has been tried before (rescheduling) and failed for various reasons. I'll give them the benefit of the doubt, why won't you?

Really I'm objecting to the idea that "if only we could have found a math undergrad, this would have been improved years ago". That's a particularly naive view.


It is my opinion that these initiatives fail for political reasons rather than on their merit. There are perverse incentives for all interested parties to avoid changing the status quo. Anytime you find a cause with diffuse costs and concentrated benefits it's going to be an uphill battle to get things implemented. As indicated by the article, math was not the barrier to implementing better scheduling. We are able to do the math.


That's quite a different issue, but I agree that this is a common failure mode.


i agree the word “AI” ought not be used to describe this.

the thing that makes these sorts of problems not quite so simple is that there are fairness constraints, not just constraints on routes and schedules. there may also be a requirement that the solution should be chosen in such a way that a rational agent has no incentive to lie about their true preferences. the best way to achieve these many conflicting goals is definitely not settled.
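A tiny, hedged illustration of why adding fairness terms unsettles the answer: two invented schedules, one cheaper in total, one with the burden spread evenly. Which one an optimizer picks depends entirely on the weight given to the fairness term (here a minimax-style "worst-off" penalty), and there is no settled way to choose that weight.

```python
# Per-family "burden" numbers are invented for illustration.
schedules = {
    "cheap": [10, 10, 40],  # lowest total, but one family is hit hard
    "fair":  [22, 22, 23],  # slightly costlier, burden spread evenly
}

def score(burdens, fairness_weight):
    # Scalarized objective: total burden plus a penalty on the
    # worst-off family, scaled by the fairness weight.
    return sum(burdens) + fairness_weight * max(burdens)

for w in (0.0, 2.0):
    best = min(schedules, key=lambda name: score(schedules[name], w))
    print(f"fairness_weight={w}: pick {best}")
```

With weight 0 the cheap schedule wins; with weight 2 the fair one does. The conflict the comment describes lives entirely in that one parameter.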


Only for certain types of constraints, however.


Sure, an inclusive headline is "What the Boston School Bus Schedule Can Teach us About letting algorithms decide". Because whether it's AI or upper-division math, the decision was consigned to a process that is opaque to the average parent. And that opaqueness more or less conceals basic tradeoffs.

The district has figured out that starting high school late-ish and letting all students out before dark provides the best situation for students.

But if all school levels started at the same time, the district would have to have a bunch of school buses and drivers that wouldn't be used at other times.

To avoid all this "waste", the district has to decide which schools to shove into some other, worse schedule. They thus came up with an algorithm to minimize the amount of bad scheduling while maintaining the existence of bad schedules. And when it sent an entirely different group than before into the bad-scheduled slots, naturally that group complained - it didn't help that these were wealthier, whiter groups with influence.

Sure, you thus have more affluent people saying they didn't want to be the ones sacrificed here. But you can also ask: why sacrifice anyone? Why not bite the bullet and have a good, standard schedule for all schools? You'd be offending the modern god of efficiency, but you'd actually be aiming for a greater good.

The article talks about Trolley Car Problems a bit and how people relate to algorithms "solving" these. The thing is instead of saying "that there are sacrifices is given, now we must decide who", which guarantees divisiveness, one could say "let us make choices as a society which mean that no person a priori need be sacrificed."


"that there are sacrifices is given, now we must decide who", which guarantees divisiveness, one could say "let us make choices as a society which mean that no person a priori need be sacrificed."

Your first quote describes reality, your second is nonsense. If there were an outcome where everyone's needs are met and nothing need be sacrificed, there would be no opposition to the changes. Every decision comes at a cost, we don't have unlimited resources so we should reach for tools that attempt to maximize what we are willing to spend. If you believe that no one needs to sacrifice in order to solve society's problems, you are living in a fantasy.


Your first quote describes reality, your second is nonsense. If there were an outcome where everyone's needs are met and nothing need be sacrificed, there would be no opposition to the changes.

Basic income is a system that would approximately guarantee all people's basic needs are met, yet there is certainly opposition to it.


Your proposals sound to me like someone going to a casino with a surefire winning strategy assuming they have more money than the house.

Please explain (or cite a source that does) how you roll out a basic income in a way that doesn't create new schemes that leave more money on the table to be hoovered up by oligarchic corporations. I'm strongly biased to believe that if you add resources arbitrarily to a human society, a segment of the population will find a way to capture them.


I am not even a proponent of basic income.

The point is that in policy today, "everyone's needs will be met" isn't automatically a winning argument.


Off topic, but as long as I trust the state, I trust basic income.

I put different amounts of faith in different states, but not very much in any state. Not in the long run. The problem is, the state is going to dictate more and more conditions for receiving that income. Couple it with oligarchic interests and no good can come of it.

Problem is, I don't see much good coming from our current systems either, subverted as we speak.


It is true that such problems /can/ be solved via mathematical linear programming (for those coming from outside math circles, note that the term "programming" here has nothing to do with coding), and this is how they are usually presented in academia. However, in practice, it is not something one would be advised to do: the complexities of real-world routing and scheduling usually make models of practical size impossible to solve exactly. Therefore, numerous heuristic and metaheuristic algorithms have been proposed and are usually used in practical applications. These will sound familiar to those with some experience with "old" AI: genetic algorithms, ant colony systems, simulated annealing, and even neural networks have been successfully applied to provide "good enough" solutions to these problems.
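A minimal simulated-annealing sketch in that spirit, applied to a toy routing instance (all coordinates invented): worse moves are occasionally accepted early on, which is what lets these metaheuristics escape the local optima that trap plain hill climbing.

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

stops = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 1), (1, 3)]

def tour_length(order):
    return sum(
        math.dist(stops[order[i]], stops[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

order = list(range(len(stops)))
best = order[:]
temp = 10.0
while temp > 0.01:
    # Propose a random segment reversal (a 2-opt-style move).
    i, j = sorted(random.sample(range(len(order)), 2))
    cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
    delta = tour_length(cand) - tour_length(order)
    # Always accept improvements; accept worse moves with a
    # probability that shrinks as the temperature cools.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        order = cand
        if tour_length(order) < tour_length(best):
            best = order[:]
    temp *= 0.995  # geometric cooling schedule

print(best, round(tour_length(best), 2))
```

The cooling schedule and move set here are the simplest possible choices; production solvers tune both heavily.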


I do machine learning and stats for a living. You could describe practically all of what's called AI by the popular press in the same way. Mostly "complex algorithms that classify/predict/optimize/decide between things." The lesson is still there and is still applicable to those things.


I am extremely interested how to solve routing problem with Excel. Can you share few links or articles?


Bus routing is just a variant of the vehicle routing problem (VRP), which itself is a generalization of the traveling salesman problem (TSP), but with multiple salespersons doing the traveling.

There is an Excel sheet for solving such problems, and if one wishes to go further, the basic heuristic and metaheuristic algorithms are not that hard to implement either. http://people.bath.ac.uk/ge277/index.php/vrp-spreadsheet-sol...

With these keywords you are welcome to fall into this rabbit hole as deep as you wish. I promise to greet you on my way up (currently finishing my PhD on the topic).
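For the curious, the basic construction heuristic really is only a few lines. A nearest-neighbor sketch for a toy single-vehicle (TSP-like) instance, with invented stop coordinates:

```python
import math

# Greedily visit whichever unvisited stop is closest to the current
# position. Coordinates are invented for illustration.
stops = {"depot": (0, 0), "A": (2, 3), "B": (5, 1), "C": (1, 6), "D": (4, 4)}

def nearest_neighbor_route(start="depot"):
    route, unvisited = [start], set(stops) - {start}
    while unvisited:
        here = stops[route[-1]]
        nxt = min(unvisited, key=lambda s: math.dist(here, stops[s]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

print(nearest_neighbor_route())
```

Nearest-neighbor gives a quick feasible route; the metaheuristics mentioned upthread then improve it.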


Thanks!


>I think that if people had understood what the algorithm was optimizing for—sleep health of high school kids, getting elementary school kids home before dark, supporting kids with special needs, lowering costs, and increasing equity overall—they would agree that the new schedule was, on the whole, better than the previous one.

I gather that the new schedule was not presented with a score card, comparing it with the old schedule, or that the score card was maybe buried in a report.

Numbers like:

- expected decrease in number of elementary students getting home after dark

- expected decrease in number of early pickup of high school students

- expected decrease in cost

- expected capacity for special needs, compared to old system

- numbers directly supporting the claim of increased equity

can be understood by everybody, and would go a long way.
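A sketch of what such a score card could look like in code - the figures below are invented placeholders, not the district's numbers:

```python
# Old vs. new schedule on the metrics families actually care about.
# All values are hypothetical illustrations.
metrics = {
    "elementary kids home after dark": (1200, 400),
    "high schoolers picked up before 7am": (900, 150),
    "annual transport cost ($M)": (120, 115),
}

for name, (old, new) in metrics.items():
    change = 100 * (new - old) / old
    print(f"{name}: {old} -> {new} ({change:+.0f}%)")
```

Three lines of arithmetic, and suddenly the tradeoff is legible to every parent rather than buried in a report.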


Even after reading the article I'm not sure what it teaches us about AI, rather:

* if you make a fair choice, it'll appear to screw over previously privileged people.

* just because it's an optimal solution doesn't mean it's a reasonable one

* actually talk to people about their needs


There is this trick among us operations researchers I'd like to share: even if one is able to perfectly model the peculiarities of a given application and get the optimal solution, it should not be the only output. Instead, one should put forward several good alternatives which the stakeholders can then discuss and become proponents for.

Nobody likes the scruffy operations researcher who awkwardly tells decision makers what to do in the form of an optimal solution, because this takes the power away from them. However, if one presents several good, preferably nearly optimal but different to each other, solutions between which the decision can be made freely, you have a better chance to make a change to better.
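A toy sketch of that trick: for a tiny invented instance, keep every solution within 10% of the optimum as an alternative to present, rather than returning only the single best one. (Exhaustive enumeration only works at this scale; real solvers collect diverse near-optimal solutions differently.)

```python
import math
from itertools import permutations

# Invented stop coordinates; stop 0 acts as the fixed depot.
stops = [(0, 0), (3, 0), (3, 3), (0, 3), (1, 1)]

def tour_length(order):
    order = (0,) + order  # the depot is always first
    return sum(
        math.dist(stops[order[i]], stops[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

tours = sorted(permutations(range(1, len(stops))), key=tour_length)
best_len = tour_length(tours[0])
# Everything within 10% of the optimum is a candidate to put in
# front of the stakeholders.
alternatives = [t for t in tours if tour_length(t) <= 1.10 * best_len]
print(len(alternatives), round(best_len, 2))
```

Handing decision makers this short list, instead of one "optimal" answer, is what keeps the power of choice with them.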


I took two notes from this that I can use in my day job doing this kind of thing:

1) Don't use approaches that are black-box in nature if they will negatively impact people who have a say in their adoption. Stay simple, or create tools that let users play with the parameters to understand how the consideration of factors creates different outcomes. Expose them to how it works; don't just ask for trust.

2) The narrative that takes hold around an algorithm supersedes the reality of what the algorithm does and how. Manage the soft side of the rollout just as strictly as the technical side. Just like in startups, sales is equally as important (if not more so) than having a working product.


> Stay simple or create tools that let users play with the parameters to understand how the consideration of factors creates different outcomes.

I've found that this works only if the tools can actually be used reliably to arrive at an outcome without supervision. If there are intermediate results that have to be checked against descriptive knowledge that isn't explicitly included in the model, then people will be disappointed when they choose a configuration and learn that it can't work for some reason or other.

Such as, "Oh, the calibration standard doesn't work in that range, I didn't put it in the model because I didn't expect anybody to try it, and it wasn't in the requirements at the time."


Don't use approaches that are black-box in nature if they will negatively impact people who have a say in its adoption.

Shouldn't that be "people who are going to be harmed by it"? I mean, this situation makes it clear that no one likes being run over by a black box. We shouldn't say "well, it's OK if they can't complain" (and the racist-AI parole system example shows this is a real risk [1]).

[1] https://www.wired.com/2017/04/courts-using-ai-sentence-crimi...


This article represents the core issue of "technology vs. luddites" in our current society: technological change, no matter whether it's positive, negative, or neutral, is almost always decried by whatever group might be affected in a negative way, even if the overall results are highly positive for society. The main, insoluble problem is that we take everyone who complains seriously, and genuinely don't want to disenfranchise people, but it's impossible to create change in society without at least one person being affected. And since the voice that complains is always the loudest one in the room, this can shut down whatever progress was supposed to happen.


I also think the political process just doesn't work well with the iterative process necessary to adopt new technology. Politics is so focused on immediate results and sound bites and whether it fixes a problem immediately. Implementing any new technology just doesn't happen right the first time. It's a learning process. You forgot to consider XYZ so you had to go back and add some new features. A feature might not work as expected and another might have had some unintended consequences. It happens. It gets fixed. The technology gets better. Not in spite of these challenges, but because of them.

Politics, on the other hand, is all-or-nothing. They didn't go back and reconsider the algorithm or the weights or discuss why certain factors were assigned certain weights. It was either implement the algorithm or don't. They balanced progress with political reputation and when "too many" voters voiced displeasure they abandoned ship for fear of a day or two of bad PR. That's not a recipe for success when it comes to technological change, it's a recipe for disaster.


The technological change here wasn't iterative; it was an immediate big-bang shock to the existing system.

I'm surprised the school board even proposed it after seeing it. It sounds like they completely ignored their own knowledge of political realities and relied on the idea that saying "the machine told us so" would insulate them from criticism.

My mom spent a little time on this issue (bus schedules) in high school. Our district officials were extremely aware of the pushback they would get to almost any significant change in busing and school start times, no matter what direction. People don’t like their cheese moved.


The linked article at the bottom is a really great way to discuss policy and technology with a broad set of readers.

https://apps.bostonglobe.com/ideas/graphics/2018/09/equity-m...

My only complaint is that requiring someone to type something halfway through an article isn't great UX. I still think the overall goal--educating people about policy tradeoffs and the role of technology--was phenomenal for a column mainly tackling local issues.

(I wonder what would have happened if they set the algorithms' preference as an end goal, but only shifted start times towards that by five or ten minutes per year, so no one's family schedule was radically affected in any given period, but they still would have realized the benefits eventually.)


My previous comment:

https://news.ycombinator.com/item?id=18059535

> "start times is something the district had wanted to do for a long time" So, was the real motivation here one of efficiency and cost-savings then?

and the response was:

"No, as far as I know having later school start times for high schoolers was the main objective, and that was the strongest one in the solution that was chosen by Boston when they used our algorithm."

"It clearly stated that cost-savings was one of the outcomes to be balanced, but not the motivation."

https://news.ycombinator.com/item?id=18059535

However, I'm still not entirely convinced this was purely done "for the students".

even in this article it claims it "started as a cost-calculation algorithm".

So please, no - you don't get to change the motivation from cost-calculation/savings to "later times for students" by re-purposing such an algorithm, even if it does benefit students in that way (albeit with other negatives that the community felt outweighed the benefit to students).

Although, what I didn't understand was that the opposition was only "15 percent [of] the public school population" and only from wealthy families, which changes the perspective somewhat.


People are weird. We feel better about things when it appears that our input has been taken into account whether or not the end result has actually changed.

Here's an article with a bunch of examples from elevator door close buttons to fake thermostats:

https://www.nytimes.com/2016/10/28/us/placebo-buttons-elevat...

So much of the pushback seems to have come down to the framing of the "algorithm" as this emotionless black box devoid of input instead of what it really is: a model based upon inputs from all sides.


For more on the consequences of big data and algorithms for humans, check out mathematician Cathy O'Neil's book "Weapons of Math Destruction". She describes problematic systems that are opaque, difficult to contest, and scalable, thereby amplifying inherent biases to affect increasingly large populations.

https://en.m.wikipedia.org/wiki/Weapons_of_Math_Destruction


Why can't high school students just take public buses which run all day anyway? (This is how I got to high school everyday growing up in England.)


My kids ride a public bus to high school in Wisconsin. But there aren't enough public buses to transport all of the kids, and there are many cities with no public transit at all.


Same here in Australia. My primary & high school buses were just the normal routes run by the local bus company, you'd often see 3 or 4 regular passengers on it as well.



I don't understand... if "the algorithm" (aka the people designing the algorithm) decides that sleep health is an important stat, and they optimize sleep health for 95% of children, but my child is left out or even worsened by the change... should I just accept it (on my child's behalf) for the greater good?


Sounds like the project was missing a capable project manager. Project managers are kind of unpopular among programmers (for various, valid reasons), but making sure the stakeholders are involved with the project is one of the duties that undisputedly has a positive effect on most projects.



