C# has LINQ. Implement Select and SelectMany extension methods for whatever you like, and you can use the LINQ syntax with your type just as easily as with IEnumerable. Foreach and async/await are baked in, that's true, but the LINQ syntax is easily extendable to new use cases.
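For anyone who hasn't tried it, here's a minimal sketch of what that looks like, using a hypothetical Maybe<T> type made up for illustration (not a real BCL type): once Select and SelectMany with the right shapes exist, query syntax just works.

    using System;

    public readonly struct Maybe<T>
    {
        public bool HasValue { get; }
        public T Value { get; }
        private Maybe(T value) { HasValue = true; Value = value; }
        public static Maybe<T> Some(T value) => new Maybe<T>(value);
        public static Maybe<T> None => default;
    }

    public static class MaybeExtensions
    {
        // Enables "select" clauses in query syntax.
        public static Maybe<R> Select<T, R>(this Maybe<T> m, Func<T, R> f) =>
            m.HasValue ? Maybe<R>.Some(f(m.Value)) : Maybe<R>.None;

        // Enables multiple "from" clauses in query syntax.
        public static Maybe<R> SelectMany<T, U, R>(
            this Maybe<T> m, Func<T, Maybe<U>> bind, Func<T, U, R> project)
        {
            if (!m.HasValue) return Maybe<R>.None;
            var u = bind(m.Value);
            return u.HasValue ? Maybe<R>.Some(project(m.Value, u.Value)) : Maybe<R>.None;
        }
    }

    // Usage: the compiler translates this into Select/SelectMany calls.
    // var sum = from x in Maybe<int>.Some(2)
    //           from y in Maybe<int>.Some(3)
    //           select x + y;          // Maybe<int>.Some(5)
    // If either operand is None, the whole expression is None.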
What we have done, and this is what the mathematicians call Monads, is identify these sets of operations; we call them standard query operators. We have a list of about 25 standard operators that you can apply to any data model.
I don't know if this works well in JS, but there is a purely functional way that avoids most of the performance hit: generator methods. In C#, I'd write functions that return IEnumerable<T> using yield return, and others that combine them with LINQ's Concat; aside from the many small iterator objects, only one big array would need to be allocated.
If parts of this work later turn out not to be needed, it's easy to discard an enumerable, and the iteration is never actually performed. It can sometimes lead to really elegant code that also performs well.
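A rough sketch of the pattern (names made up for illustration): each generator is lazy, Concat just composes iterators, and nothing big is allocated until the single ToArray at the end.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class LazyPartsDemo
    {
        // Each part is a lazy generator; nothing executes until enumeration.
        static IEnumerable<int> Header() { yield return 0; }

        static IEnumerable<int> Body(int n)
        {
            for (int i = 1; i <= n; i++)
                yield return i * i;   // computed on demand
        }

        static IEnumerable<int> Footer() { yield return -1; }

        static void Main()
        {
            // Concat only builds small iterator objects; no copying happens here.
            IEnumerable<int> all = Header().Concat(Body(1000)).Concat(Footer());

            // If 'all' turns out not to be needed, just drop it and the
            // iteration above never runs at all.

            // Only here is a single big array allocated and filled.
            int[] result = all.ToArray();
            Console.WriteLine(result.Length);   // 1002
        }
    }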
I'm currently working on a side project, a distributed in-process key/value store for .NET Standard, which might be interesting in this context. I hope you'll pardon the shameless plug, but I'm interested in getting feedback. Particularly, what to focus on, what use cases, missing features... And of course, if you see any issues with the approach.
It needs more work, but I think the eventually consistent part could already be useful for a simple distributed cache layer atop a classic relational DB.
This is what I'd do too, and it is quite strange that it was dismissed like that. Maybe because it's not as much fun :)
BTW I believe it may be a bit more than just O(n) time on the whole. If your hash-table is auto-growing, you'll have to pay for its resizing. And OTOH if it's sized up front, then you'll have to allocate something proportional to the size of the full array, not just to the number of distinct elements.
I think I get it. With doubling, the sum of all the work done for resizing is roughly (written chronologically backwards): n/2 + n/4 + ... ~= n, and O(n) + O(n) is still O(n). Thanks!
Yep! This is generally referred to as "amortized linear time". It's the difference between considering the cost "per operation" vs. "per algorithm". The former is technically correct (as an upper bound), but too pessimistic when you consider the algorithm as a whole.
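Spelled out (using the backwards-chronological framing above, and assuming the table starts small and doubles each time it fills), the copies performed across all resizes form a geometric series:

    n + \frac{n}{2} + \frac{n}{4} + \dots + 1 \;<\; 2n \;=\; O(n)
    \quad\Longrightarrow\quad \text{amortized cost per insertion} \;=\; O(n)/n \;=\; O(1)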
This is a weird title. The article says that the people on basic income were equally likely to find a job as those who were not receiving it. I was expecting that basic income might make people less motivated to find jobs, and that would have been an issue. But this result seems like a positive outcome for UBI.
It went to 2000 people already getting unemployment benefits.
> It was run by the Social Insurance Institution (Kela), a Finnish government agency, and involved 2,000 randomly-selected people on unemployment benefits.
So I'm curious what effect that had compared with the general population's approach to finding employment. Particularly young people who may never have had a job before.
Scott Hanselman is a workaholic. He is also very evasive about answering questions directly if it isn't politically expedient. I have noticed this both in a few email correspondences I've personally had with him and in his comments on his website. I quite like how he explains things and I do think he is a good dude.
Hanselman is also one of those developers who can bang out code 24/7. I've met them. They can get a lot of stuff done; however, there is usually someone like me picking up the pieces left behind.
Recently I worked with one: he banged out an MVC project, changed the API I had written, commented out all the tests I had written, then went on holiday to Greece for several weeks, and was then moved onto another project. Guess who had to clean up the mess ...
TBH I worked in high-stress environments (gambling and finance). Management didn't care that I would work until 7-8pm most evenings (9am start) and pull a few all-nighters, but if I was 30 minutes late a couple of times in a row I would get called in for a talk about "tardiness". I was younger at the time and thought they would cut me a bit of slack after delivering the impossible on time!!
After a while you realise that it isn't worth the effort and just get a contract job where you earn twice as much and have none of the bullshit.
Did you check if the "nouveau" OSS Nvidia driver was still installed? Happened to me all the time, so I got used to always checking and removing it manually before letting him reboot.
Why always Peano numbers? If it's doable, I'd love to see something like a web server with provable safety guarantees (e.g. eliminating any SQL injection risk), or a concurrent system with the compiler proving there are no races, even though the code is sharing memory. Or just proving that some simple business rules hold. Is stuff like that doable in a dependently typed language?
We start with Peano numbers because they are probably the simplest inductive data type, and they set the scene for how we tend to prove things in dependently typed programming (inductive reasoning). Proving the things you want is probably doable (I can't say for certain; they are vaguely specified, which doesn't fit well with formal methods), but significantly more involved. Trying to take the reader from nothing to that in a single blog post or talk is beyond anyone's ability, largely because dependent types aren't quite ready for the mainstream in terms of convenience.
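To make that concrete, here is roughly the shape of the exercise those introductions walk through, sketched in Lean (an Agda or Idris version would look much the same; the names are my own, not from any particular tutorial):

    -- The simplest inductive data type: Peano naturals.
    inductive Peano where
      | zero : Peano
      | succ : Peano → Peano

    -- A function defined by structural recursion on the first argument.
    def add : Peano → Peano → Peano
      | Peano.zero,   m => m
      | Peano.succ n, m => Peano.succ (add n m)

    -- "n + zero = n" doesn't follow by computation alone (add recurses on
    -- its first argument), so it's the classic first proof by induction.
    theorem add_zero_right (n : Peano) : add n Peano.zero = n := by
      induction n with
      | zero      => simp [add]
      | succ n ih => simp [add, ih]

The more ambitious guarantees (protocol-following actors, injection-free queries, and so on) use this same inductive pattern, just with much larger and messier types.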
That said, I'd read some of Edwin Brady's research, as he is actively trying to find the intersection of "business programming" and dependent types: https://edwinb.wordpress.com/publications/
Yes, it's possible but might require a lot of effort.
For example, my thesis was about creating a system where actors send messages to each other and the system guaranteed that you could not send messages with an unexpected type to another actor. Extending that, you could create a system where messaging has to follow an exact protocol, ensuring that there can be no deadlocks.
You can read my work here: https://www.dropbox.com/s/lczrcqu2m9p6osv/Master_Thesis.pdf?... and the code is here: https://github.com/Zalastax/singly-typed-actors
It's not Agda, but you may be interested in this article by Foursquare about how they use Scala Phantom Types to ensure MongoDB queries are semantically well formed at compile time:
You might like this talk [1] about building a better regex library using dependent type style features. It is really great! The regex is parsed at compile time and then can return better typed information about the capture groups to the lib user. There is also this interview I did with the author on it [2]