I have always wondered how archives manage to capture screenshots of paywalled pages like the New York Times or the Wall Street Journal. Do they have agreements with publishers, do their crawlers have special privileges to bypass detection, or do they use technology so advanced that companies cannot detect them?
The big difference is that Anthropic blocks competitors from using its products (they literally cut off direct API access, and even access through third parties like Cursor).
Isn't the whole issue here that, because the agent trusted Anthropic IPs/URLs, it was able to upload data to Claude, just to a different user's storage?
I'm curious how tools like Claude Code or Cursor edit code. Do they regenerate the full file and diff it, or do they just output a diff and apply that directly? The latter feels more efficient, but harder to implement.
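For reference, here's a minimal sketch of the second approach, assuming a hypothetical (search, replace) block format where the model emits only the changed region and the tool applies it to the file. This is just an illustration of the idea, not how Claude Code or Cursor actually implement edits.

    # Sketch of the "emit a diff and apply it" approach.
    # The edit format (exact search/replace pairs) is an assumption for illustration.
    from pathlib import Path

    def apply_edits(path: str, edits: list[tuple[str, str]]) -> None:
        """Apply (search, replace) edits to a file in place.

        Each search block must match exactly once; otherwise the edit is
        rejected and the model would be asked to regenerate it.
        """
        text = Path(path).read_text()
        for search, replace in edits:
            count = text.count(search)
            if count != 1:
                raise ValueError(f"search block matched {count} times; expected exactly 1")
            text = text.replace(search, replace, 1)
        Path(path).write_text(text)

    # Example: only the changed region is sent, not the whole file.
    Path("greet.py").write_text("def greet(name):\n    print(name)\n")
    apply_edits(
        "greet.py",
        [("def greet(name):\n    print(name)",
          "def greet(name: str) -> None:\n    print(f\"Hello, {name}!\")")],
    )

The trade-off is roughly what you'd expect: emitting a targeted diff is cheaper in output tokens, but it fails when the search block doesn't match the current file state, so the tool needs a retry or fallback path (e.g., regenerating the whole file).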
Most things don't work. You can be an armchair critic and scoff, and you may be right a lot of the time. But you'll also never really build anything of note or have crazy hockey-stick growth in your life.