That said, writing the React 19 adapter is a whole new task in itself, and I think these tests should eventually be converted to RTL anyway, so the approach described in the blog post is still valuable.
Did you use GPT-3.5 or GPT-4? GPT-4 solved this correctly when I ran it (though admittedly, given its non-deterministic nature, it might have failed for you but worked for me).
My takeaway: modeling problems correctly in MiniZinc is exceptionally difficult for non-trivial problems. You can model the problem correctly, but you'll likely still need to add extra "constraints" whose only purpose is to improve the solver's performance to the point where it's even remotely usable on real problems.
It's a really interesting tool, but one of the reasons we thought it might be useful for this problem was that non-technical people could easily change the constraints and play with the costs of different operations. I don't think it's particularly good for that, at least in this problem domain.
Do I understand correctly that this kind of constraint satisfaction is more complex than just turning things into a bunch of SAT clauses? Otherwise (and admittedly without a deep understanding of genomics or solvers), I would be surprised if constraint satisfaction were the best approach for edit distance…
Once you added domain-specific performance-oriented constraints, did you find this to be a useful and viable approach to the problem?
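For what it's worth, plain edit distance has a textbook dynamic-programming solution that needs no solver at all, which is part of why the surprise above seems warranted. A minimal sketch (the real genomics problem presumably layers extra constraints on top of this):

```ts
// Classic Wagner-Fischer dynamic programming for Levenshtein edit distance.
// Runs in O(n*m) time with O(n) memory; no constraint solver involved.
function editDistance(a: string, b: string): number {
  const m = a.length;
  const n = b.length;
  // prev[j] = distance between the first i-1 chars of a and first j chars of b
  let prev = Array.from({ length: n + 1 }, (_, j) => j);
  for (let i = 1; i <= m; i++) {
    const curr = new Array<number>(n + 1);
    curr[0] = i;
    for (let j = 1; j <= n; j++) {
      const substCost = a[i - 1] === b[j - 1] ? 0 : 1;
      curr[j] = Math.min(
        prev[j] + 1,             // delete a[i-1]
        curr[j - 1] + 1,         // insert b[j-1]
        prev[j - 1] + substCost, // substitute (or match)
      );
    }
    prev = curr;
  }
  return prev[n];
}

console.log(editDistance('kitten', 'sitting')); // 3
```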
MiniZinc is just a modeling language; you can throw the problem at different solvers. You can use SAT solvers (assuming you have a wrapper that translates FlatZinc to CNF), CP solvers (which can be SAT-based underneath or use different algorithms like MAC), SMT solvers (Satisfiability Modulo Theories, e.g. with arithmetic), MIP solvers (usually Simplex + Branch & Bound)...
Key takeaway: Boisot and McKelvey updated this law to the "law of requisite complexity", which holds that, in order to be efficaciously adaptive, the internal complexity of a system must match the external complexity it confronts.
I have only ever used Z3, but you might be onto something. Modeling problems is really challenging. It doesn't help that when you search for documentation or guidance, there are only two types of resources: beginner Sudoku tutorials, or academic papers discussing the minutiae of optimization properties in dense jargon.
I really like MiniZinc, especially that one can test a lot of different types of solvers on a problem.
But one of its drawbacks is its limited handling of input and output (including preprocessing and postprocessing). In some cases - for example, when the output is rather simple - I use Picat or Python to transform the input to MiniZinc format (.dzn or JSON) and then run MiniZinc.
But for fancier output I tend to use Picat, or Python together with OR-tools CP-SAT, CPMpy, or minizinc-python.
They do actually talk about this idea in the talk. The "shortest possible program" that can describe the music is uncomputable, but a short description can still exist; in the case you just described, it's simply unknown to the listener.
Would it be possible to mitigate the CSS-based fingerprinting that uses URLs by having the client forcibly cache the fonts/URLs? I think then there would be client-side cache hits on return visits, and no requests to the server.
I imagine this would be a pain for browsing in general, but it could help browsers in a privacy mode.
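For illustration, a minimal sketch of what that forced caching could look like as a service worker. This is a hypothetical mitigation, not something browsers do today, and the font URLs are placeholders:

```ts
/// <reference lib="webworker" />
declare const self: ServiceWorkerGlobalScope;

const FONT_CACHE = 'forced-font-cache-v1';
const FONT_URLS = ['/fonts/regular.woff2', '/fonts/bold.woff2']; // placeholders

self.addEventListener('install', (event) => {
  // Fetch every font once, up front, so the set of requests no longer
  // depends on which fonts this particular visit actually uses.
  event.waitUntil(caches.open(FONT_CACHE).then((cache) => cache.addAll(FONT_URLS)));
});

self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (url.pathname.startsWith('/fonts/')) {
    // Serve fonts from the cache on return visits: cache hits mean no
    // per-visit font requests ever reach the server.
    event.respondWith(
      caches.match(event.request).then((hit) => hit ?? fetch(event.request))
    );
  }
});
```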
In addition to making browsing slower, you'd also consume more of your data if you're on a capped plan.
These days I have a 70 GB plan with data rollover, which leaves me with plenty of data to spare. But for the longest time I used to be on a plan with only a couple of GB of data per month, and it was a real pain in general. In that situation, downloading all resources instead of only the ones I need would have made a noticeable impact I am sure.
Even though I now have data to spare, the additional slowness that you mentioned would be annoying enough that I would not want my device to do that. Additionally, transferring more data would also consume more battery.
Congrats on the launch! This looks like a great project and I'm sure you learnt a lot from building it.
Some first impressions:
1. You used Pinterest as an example link in the guide, and it wasn't immediately clear how this was differentiated from Pinterest. If you could think about why it'd be better to use SeeLink rather than an existing tool, I'd be more interested in it
2. I think the red block colour you're using is too much (#FF0004). I'd consider changing it to a lighter red, or even white background with black text (it'd be easier to read)
3. It wasn't clear to me what the bookmark icon does – haven't I already bookmarked the link by adding it to my board? Is this feature for further filtering?
Overall, great job! It looks quite polished and I didn't see any bugs.
Thanks a lot for the encouraging comments! I have addressed some of your points below.
1. It looks like it wasn't a good idea to use Pinterest as an example link, haha. I think Pinterest is mostly a social network for discovering interesting links from the web, whereas this is more of a tool that people would use to share important links with each other instead of through messaging apps (which is how most people I know share links).
2. Hmmm, I've got mixed feedback on that, but I think the best thing to do would be to change it to white with maybe black or red text in the foreground.
3. You're right, the bookmark icon is for further filtering, something you'd use if you're browsing someone else's board and like a particular link you'd want to read later.
If you don't mind answering a question of my own, do you think you'd actually find a use for this and actively use it?
Makes sense! I think I probably won't use it much yet, as the example you mentioned (messaging apps for link sharing) works pretty well, and often they also have a view where you can just look at all of the links sent. Some ideas of what would make it more likely for me to use:
* The ability to integrate with those messaging apps, so I could use SeeLink but then send out those links to friends via different messaging apps. I often send the same links to multiple people on different platforms; it'd be neat to be able to track who I sent them to and catalogue them that way
* A feature like related links, or some form of automatic categorization of links based on tags.
I think Pinterest is great for discovery, so if this is more about link categorization and social sharing, you could take it in either of those directions
Noted! All of the points you mentioned also seem really interesting; I think they might make the experience more seamless for users.
Yup, Pinterest is great for discovering interesting links; I think the direction SeeLink should go in is social sharing and work-related sharing of links.
> 11. Cooking pollutes the air. Opening windows for a few minutes after cooking can dramatically improve air quality.
After bad bushfires where I live, I invested in both a CO2 monitor and a particle air-quality monitor (a Dylos DC1100 Pro). It shocked me just how much cooking (and other factors I can't quite pin down yet) affected the air quality. Think 10 times more particles due to cooking, even with the fan on and the windows open.
Other possible sources of indoor air pollution are carpets and other manufactured items, interestingly.
CO2 was bad too, though opening my window has a greater impact on that. Still, it is shocking to see levels of ~800-900 ppm inside with the windows closed.
Node modules often need to depend on git for developer-experience features. Having git available in Node with no dependencies is really nice; you don't need to assume anything about the git installed in your user's environment.
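For example, with a pure-JavaScript git implementation such as isomorphic-git (assuming that's the kind of library in question), a tool can clone and read history without shelling out to a git binary. A rough sketch; the URL and directory are placeholders:

```ts
import * as git from 'isomorphic-git';
import http from 'isomorphic-git/http/node';
import fs from 'node:fs';

async function main(): Promise<void> {
  const dir = './scratch-repo'; // placeholder working directory

  // Clone and read history entirely in JS, without assuming any git
  // binary exists on the user's machine.
  await git.clone({
    fs,
    http,
    dir,
    url: 'https://github.com/isomorphic-git/isomorphic-git',
    depth: 5,
  });

  const commits = await git.log({ fs, dir, depth: 5 });
  for (const c of commits) {
    console.log(c.oid.slice(0, 7), c.commit.message.split('\n')[0]);
  }
}

main().catch(console.error);
```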
It covers some pros and cons; the main advantages of using an app shell are:
> whether your web app is best modeled as a single-page application (advantage: App Shell); and on whether you need a model that's currently supported across multiple browsers' stable releases (advantage: App Shell)
I think it's reasonable to argue that lots of sites built as SPAs don't need to be, if they can do this instead.
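For reference, the core of the app-shell model is a service worker that pre-caches a minimal shell and serves it instantly for navigations, while the page fetches its actual content afterwards. A minimal sketch; the file names are placeholders:

```ts
/// <reference lib="webworker" />
declare const self: ServiceWorkerGlobalScope;

const SHELL_CACHE = 'app-shell-v1';
const SHELL_ASSETS = ['/shell.html', '/app.css', '/app.js']; // placeholders

self.addEventListener('install', (event) => {
  // Pre-cache the shell (markup + core assets) so first paint never
  // waits on the network.
  event.waitUntil(caches.open(SHELL_CACHE).then((cache) => cache.addAll(SHELL_ASSETS)));
});

self.addEventListener('fetch', (event) => {
  // For page navigations, respond with the cached shell; the page's own
  // JS then fetches the actual content.
  if (event.request.mode === 'navigate') {
    event.respondWith(
      caches.match('/shell.html').then((hit) => hit ?? fetch(event.request))
    );
  }
});
```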
By this argument, JSX shouldn't be published to NPM either.
You might argue that you know what you're consuming if you're importing, say, a React component, and it's likely written in JSX, but it's really a categorization issue.
A standard field in the package.json indicating what the source is written in might be useful, but would probably be left unpopulated. Personally, I find it fairly straightforward to tell what a package is written in just by looking at the source.
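For illustration only, such a field might look like this; the `sourceDialect` name is purely hypothetical, not an existing npm convention:

```json
{
  "name": "example-component-lib",
  "version": "1.0.0",
  "main": "dist/index.js",
  "sourceDialect": "jsx"
}
```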