
Yes, we moved to GitHub Codespaces and it has generally been good.

Pros: one-click setup for devs jumping between projects, once you get the devcontainer setup process working; that part takes some fiddling and trial and error.

It has felt good to wrap some older projects in a devcontainer; once it's working you can feel confident the environment is stable, and moving everyone to new environments has been easy.
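For a sense of scale, the devcontainer for one of those older projects ended up being not much more than this (the name, image, and extension here are illustrative, not our actual config):

    {
      "name": "legacy-project",
      "image": "mcr.microsoft.com/devcontainers/typescript-node:18",
      "postCreateCommand": "npm ci",
      "forwardPorts": [3000],
      "customizations": {
        "vscode": {
          "extensions": ["dbaeumer.vscode-eslint"]
        }
      }
    }

Once something like that is committed, "one click" really does mean one click for the next dev.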

Keeping a haywire dev/npm script away from your main machine is also good, but I know it’s not foolproof.

Cons: Codespaces CPUs are the usual cloud-slow, so you need to pay for bigger machines, and single-threaded perf still won't be as good as your laptop's, which is a real shame. I suspect GitHub's competitors offer better CPUs.

Very rarely, Codespaces has a technical issue and becomes inaccessible, so you can't do your work. Also, to keep it from sleeping during the day due to inactivity you may leave it running, but it forces a shutdown after 12 hours or so, so very long dev sessions can be interrupted.

GitHub also dropped support for using JetBrains IDEs, which was not cool, so it's VS Code only now. It's usable, but I would have preferred other IDEs.

If the Codespaces team is reading, I'd love to see some improvements here.


GitLab's write-up mentions a dead man's switch: "The malware continuously monitors its access to GitHub (for exfiltration) and npm (for propagation). If an infected system loses access to both channels simultaneously, it triggers immediate data destruction on the compromised machine."

https://about.gitlab.com/blog/gitlab-discovers-widespread-np...
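Roughly, the behavior described would reduce to something like this (a hypothetical Python sketch of the logic from the write-up, not the actual malware code; the continuous-monitoring loop is omitted and the destructive part is a stub):

    import requests

    def channels_alive() -> bool:
        """True while at least one of the two channels is reachable."""
        for url in ("https://api.github.com", "https://registry.npmjs.org"):
            try:
                requests.head(url, timeout=5)
                return True
            except requests.RequestException:
                continue
        return False

    def destroy_data() -> None:
        """Stub: per the write-up, the real payload destroys local data."""
        raise SystemExit("dead man's switch tripped")

    # The switch trips only if *both* channels are lost simultaneously.
    if not channels_alive():
        destroy_data()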


Neat! How do you handle state changes during tests? For example, in a todo app the agents are (likely) working on the same account, in parallel or even across subsequent runs, so some test data has been left behind, or the data a test expects is no longer set up.
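For context, I mean the kind of collisions that per-run namespacing of fixtures would normally avoid, something like this hypothetical sketch:

    import uuid

    def make_test_account(prefix: str = "qa") -> dict:
        """Create an isolated fixture so parallel runs don't share state."""
        run_id = uuid.uuid4().hex[:8]
        return {
            "email": f"{prefix}+{run_id}@example.com",
            "todo_list": f"{prefix}-list-{run_id}",
        }

    # Each agent/run gets its own account, so leftovers from a previous
    # run can't be mistaken for (or clobber) the current run's data.
    account = make_test_account()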

I'm curious whether you'd also move into API testing using the same discovery/attempt approach.


This is one of our biggest challenges, you're spot on! What we're working on to tackle this includes a memory layer that agents have access to, so state changes become part of their knowledge and are accounted for while conducting a test.

They're also smart enough not to be frazzled by things having changed; they still have their objectives and will work to understand whether the functionality is there or not. The beauty of non-determinism!


I gave the demo a try and was able to run a search that showed "51 results" - great start! A few things I noticed, though:

On the Data tab it says "no schema defined yet."

The Schema tab doesn’t seem to have a way to create a schema.

Most of the other tabs (except for Sources) looked blank.

I did see the chat on the right and the "51 items" counter at the top, but I couldn’t find any obvious way to view the results in a grid or table.


Could you share the session URL via the feedback form, if you still have access to it?

That's really strange; it sounds like Webhound for some reason deleted the schema after extraction ended, so although your data should still be tied to the session, it just isn't being displayed. Definitely not the expected behavior.


Note you also need to raise the max upload limit to 200 MB in the settings after the plan change.


Yep, tried that (ridiculous that it doesn't auto-update to this, though).


Some of the slowdown will come from not indexing the FK columns themselves, as they need to be searched during updates / deletes to check the constraints.
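For example, in Postgres the referenced (primary key) side is indexed automatically, but the referencing FK column is not, so each delete of a parent row scans the child table unless you add the index yourself (table and column names hypothetical):

    -- Without this index, DELETE FROM users must scan orders
    -- to verify no row still references the deleted user.
    CREATE INDEX idx_orders_user_id ON orders (user_id);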


The project looks interesting; I would welcome an API (or C# client) to be able to use it.


Unfortunately, it seems the underlying search API is throwing '{ "message": "Not Ready or Lagging"}' for every search.


Just woke up (in Madrid currently) and seeing all the errors. Working on getting the product back live.


We're back live now. I had to set up a new search server and am quickly importing 100k products at a time. The results should be getting better by the minute (as the data set increases).


Neat, though I noticed it gave me wrong data: when I asked for the top 3 rows it provided the wrong values, due to not using UTF-8. Asking ChatGPT to use UTF-8 support fixed it, so perhaps update its prompt.

Check out https://chat.openai.com/share/34091576-036a-4e82-b4e3-a8798d...
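Presumably the fix boils down to passing an explicit encoding when the file is read, something like this (a guess at the equivalent pandas call; I can't see what it actually ran):

    import pandas as pd

    # An explicit encoding avoids mojibake in non-ASCII rows; the
    # default/locale encoding is what produced the wrong values.
    df = pd.read_csv("data.csv", encoding="utf-8")
    print(df.head(3))  # the top 3 rows now decode correctly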


You can download Google Maps areas while on WiFi and later access those maps without Internet (or to avoid roaming charges).


OsmAnd is much better for this. The maps are far more detailed, especially for hiking and cycling.

