This was inevitable, and better now, while the damage is less widespread, than later. Now clawdbot (or whatever they decide to call themselves) will have to respond with better security safety nets. Individuals will always naively download whatever is on the internet; platforms need to safeguard against that.
Remember the early days of Windows? Yeah, it's gonna happen again with AI.
Auto-generated GraphQL clients work really well if you have a larger codebase and a large team. You can think of them as forced high-quality documentation.
If you are working on a small project with just a few engineers, the additional lift for GraphQL might not be worth it, especially if the team is not already well versed.
I agree with you there: having GraphQL force adherence to a schema is definitely a good thing for a larger team, but smaller teams will probably get bogged down without previous experience.
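To make the "forced documentation" point concrete, here's roughly the kind of typed client a schema-driven codegen tool spits out. This is a hand-written sketch, not any particular tool's output; the schema, endpoint URL, and type names are all made up.

    // Hypothetical example of a codegen-style typed GraphQL client.
    // Everything here (schema, endpoint, type names) is illustrative.

    const GET_USER_QUERY = /* GraphQL */ `
      query GetUser($id: ID!) {
        user(id: $id) {
          id
          name
          email
        }
      }
    `;

    // Types a codegen tool would derive from the schema -- this is the
    // "forced documentation": every field and its nullability is explicit.
    interface GetUserVariables {
      id: string;
    }

    interface GetUserResult {
      user: { id: string; name: string; email: string | null } | null;
    }

    // Minimal typed wrapper; generated clients emit one of these per operation.
    async function getUser(vars: GetUserVariables): Promise<GetUserResult> {
      const res = await fetch("https://api.example.com/graphql", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ query: GET_USER_QUERY, variables: vars }),
      });
      const { data } = (await res.json()) as { data: GetUserResult };
      return data;
    }

The value for a big team is that nobody has to go read the resolver code to learn what `user.email` can be; the generated types answer that for every consumer.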
Months after this consolidation, Facebook decided to create a competing community using The Linux Foundation®. As a first action, Facebook applied for a trademark on Presto®. This was a surprising, norm-breaking move because up until that point, the Presto® name had been used without constraints by commercial and non-commercial products for over 6 years. In September of 2019, Facebook established the Presto Foundation at The Linux Foundation®, and immediately began working to enforce this new trademark. We spent the better part of the last year trying to agree to terms with Facebook and The Linux Foundation that would not negatively impact the community, but unfortunately we were unable to do so. The end result is that we must now change the name in a short period of time, with little ability to minimize user disruption.
(IMHO, PrestoSQL also looked maybe a little too much like PostgreSQL in this space.)
I'm imagining an unusually efficient brandstorming session. "OK, folks, idea hats on, there are no bad ideas... we've got Presto..." "Uh... new Presto... New-o..." "Neutrino..." "Trino?" "Trino!" "Searching it now!"
Well, these robots will be operating in highly competitive markets offering 2-3 day shipping, so Bangladeshi packers will not be in those labor markets.
With the rising minimum wage in the US, we are likely looking at $10/hr in various markets. These robots run 24/7, so that is $10 x 24 x 365 = $87,600 saved per year. Even with depreciation counted in, it sounds pretty economical to me.
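A quick sanity check of that figure. The wage is from the comment above; the robot price and useful life below are made-up assumptions, not Amazon numbers.

    // Back-of-the-envelope check of the $87k claim.
    const hourlyWage = 10;                  // $/hr, assumed minimum wage
    const hoursPerYear = 24 * 365;          // robot runs around the clock
    const laborSavedPerYear = hourlyWage * hoursPerYear;      // 87,600

    const robotCost = 250_000;              // hypothetical purchase price
    const usefulLifeYears = 5;              // hypothetical straight-line life
    const depreciationPerYear = robotCost / usefulLifeYears;  // 50,000

    console.log(laborSavedPerYear);                           // 87600
    console.log(laborSavedPerYear - depreciationPerYear);     // 37600 net/yr under these assumptions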
I have no doubt that a certain portion of workers benefitted from the new policy while some portion of workers did not. It is too early for anyone to judge the policy without the numbers being published by Amazon. My issue with the article is that it presents a single story of a person who was hurt by the policy and extrapolates that to be the overall tone at Amazon. This article relies on the sensational and emotional aspects of one story, rather than facts and statistics.
Obviously, Amazon made this decision as a business and not a charity, so it stands to reason that this benefits the business more than employees.
However... as a full-time employee who has his two shares of AMZN, I honestly would rather have had the extra income over the last two years. Maybe I could have put a significant dent into my student loans, or could have actually afforded my own place. As it was, too much of the extra "income" provided by benefits required gambles that low-income people really shouldn't be required to depend on to make ends meet. Productivity bonuses aren't guaranteed, overtime isn't guaranteed, remaining employed for two years isn't guaranteed.
I see it as the ceiling having come down a bit, but the floor also having come up. And let's be honest... the second it becomes feasible to fully automate picking, stowing, and counting, those employees are getting fired on the spot. There are no long-term career prospects for the vast majority of Amazon warehouse workers, and they should consider the value of a stable income in the face of an uncertain future.
How do you defend the statement "Each sensor creates a sphere of intelligence and the more data they collect, the smarter they get"?
Do you mean each device gets smarter individually because the specific device learned more about the specific space? Or that there is some kind of supervised learning component where you would adjust the algorithm/model over time for every device?
At a local level, each sensor builds a background model, which we diff against and combine with inference outputs for detections (background modeling helps reduce our false-positive rate). At a global level, we continuously push new pre-trained models over the air. These are built using third-party data sources (so not sourced from the sensors themselves).
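Roughly what that local background-modeling step could look like, as a minimal sketch under my own assumptions (a per-pixel running-average background and a simple motion-overlap gate on detector boxes); the parent comment doesn't describe their pipeline in this much detail, so none of these thresholds or data layouts are theirs.

    // Sketch: per-pixel exponential-moving-average background, used to gate
    // detector outputs so static false positives get suppressed.
    // Grayscale Float32Array frames and all thresholds are assumptions.

    interface Box { x: number; y: number; w: number; h: number; score: number }

    class BackgroundGate {
      private background: Float32Array | null = null;
      constructor(
        private width: number,
        private height: number,
        private alpha = 0.05,          // background update rate
        private diffThreshold = 25,    // per-pixel "changed" threshold (0-255 scale)
        private minChangedFrac = 0.2,  // min fraction of changed pixels inside a box
      ) {}

      // Update the background model and keep only detections that overlap motion.
      filter(frame: Float32Array, detections: Box[]): Box[] {
        if (this.background === null) {
          this.background = Float32Array.from(frame);
          return detections; // no background yet; pass everything through
        }
        const bg = this.background;
        const changed = new Uint8Array(frame.length);
        for (let i = 0; i < frame.length; i++) {
          if (Math.abs(frame[i] - bg[i]) > this.diffThreshold) changed[i] = 1;
          bg[i] = (1 - this.alpha) * bg[i] + this.alpha * frame[i]; // EMA update
        }
        return detections.filter((d) => {
          let hits = 0, total = 0;
          for (let y = d.y; y < d.y + d.h && y < this.height; y++) {
            for (let x = d.x; x < d.x + d.w && x < this.width; x++) {
              hits += changed[y * this.width + x];
              total++;
            }
          }
          return total > 0 && hits / total >= this.minChangedFrac;
        });
      }
    }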
Being the president-elect has its perks, one of which is the ability to summon the tech industry's biggest CEOs and execs. If anyone knows what they were chatting about, please do share!