TL;DR The article claims AppCloud (the software in question) has ties to ironSource, an Israeli-founded company now owned by US-based Unity, but never clarifies what those ties are. The author only states that an ironSource technology called "Aura" appears to do something similar to AppCloud, while also pointing out that AppCloud isn't listed anywhere on ironSource's website. They also acknowledge that there's currently no evidence that AppCloud is doing anything weird. This looks an awful lot like an "Israel bad" article.
It feels like it's hard to take much from this without running the trial many times for each model. Then it would be possible to see whether there are consistent themes among each model's solutions; otherwise, the specific style of each result could be somewhat random. I didn't see any mention of running multiple trials per model.
Oddly enough, I've found models are actually quite consistent in their drawings of pelicans riding bicycles.
I even remember one case where a stealth model was running in preview via OpenRouter; I asked it for an SVG of a pelican riding a bicycle and correctly guessed the model vendor from its response!
If I order a package from a company selling a good, am I inviting all of that company's competitors to show up at my doorstep, try to outbid the original company's delivery person when they arrive, and maybe all arrive at the same time and collapse my porch? No, because my front porch is a limited resource that I paid for with an intended purpose in mind. Is it illegal for those other people to show up? Maybe not by the letter of the law.
I mean, it costs money to host content. If you're hosting content for bots, fine; but if the money you're paying to host it is meant to benefit human users (the reason robots.txt exists), then yeah, scrapers ought to ask permission. The content might also be copyrighted. Honestly, I don't even know why I'm bothering to mention these things, because it all feels obvious. LLM scrapers obviously want as much data as they can get, whether they have to act like assholes (ignoring robots.txt) or criminals (ignoring copyright) to get it.
Not surprising. Look at what glorious examples of virtue we have among those at the top of today's world. I could use a little inspiration from that front, but there's none to be found. A rare few of us can persevere by sheer force of will, but most just find the status quo pretty depressing.
I've seen people trying to blame Kramnik for this, what with his cheating accusations and all. I think Kramnik is a jerk, like everyone else does, but it still seems crazy to point the finger at him. We could find out tomorrow that Naroditsky died of a random stroke. We just have no idea at this point.
> But few if any humans on earth can demonstrate the breadth and depth of competence that a SOTA model possesses.
Most humans can count the occurrences of a letter in a word. The word "competence" here is doing quite a bit of work. I think most people understand competence to mean more than encyclopedic knowledge paired with very limited reasoning capability.
> AGI never meant human level intelligence until the LLM age. It just meant that a system could generalize one domain from knowledge gained in other domains without supervision or programming.
I think it's probably correct to say that many people who seriously studied the problem had a broader notion of AGI than the layperson who only ever talked about the Turing test in the most basic terms. Also, I don't think LLMs have convincingly demonstrated a great ability to generalize.
They're basically really great natural-language search engines, but for the fact that they give incorrect yet plausible answers about 5-10% of the time.
>> they give incorrect but plausible answers about 5-10% of the time.
This describes most of the human population as well. Why do we expect machines to be more accurate in their correctness than humans before we say they're at parity, when they are clearly as much savant as idiot? It's a strange bias.
Dependency management has always felt complicated. Environment management, though, is way simpler than people realize: Python locates its prefix (and therefore its site-packages directory) relative to the interpreter binary it was launched from, and a Python "env" is essentially just a directory containing a copy (or symlink) of that binary plus a small pyvenv.cfg pointing back at the base installation. That's pretty much it. Basically every difficulty I've ever had with Python environments has been straightened out by going back to that basic understanding. The narrative around virtualenvs has always made them sound scary, but the reality really isn't.
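A minimal sketch of what I mean, using only the standard library (exact paths will differ per machine): inside a venv, sys.prefix is derived from the location of the python binary you launched, sys.base_prefix still points at the underlying installation, and site-packages is resolved relative to that prefix:

    import sys
    import sysconfig

    # sys.prefix is derived from the location of the running python binary;
    # inside a venv it is the venv directory (where pyvenv.cfg lives).
    print("prefix:       ", sys.prefix)
    # sys.base_prefix is the installation the venv was created from.
    print("base prefix:  ", sys.base_prefix)
    print("in a venv?    ", sys.prefix != sys.base_prefix)

    # site-packages is resolved relative to the prefix, which is why
    # "which python binary am I running?" answers most env questions.
    print("site-packages:", sysconfig.get_path("purelib"))

Run it once with your system python and once with a venv's python and the whole mechanism becomes obvious.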
You said nothing wrong. Some people just feel embarrassed about being partly responsible for the current situation by having voted for Trump, and they react to that embarrassment by trying to shift blame.