I'm a user and have been negatively impacted by the feed fetching optimizations - daily feeds are often a few days behind and come in bunches. Two examples:
- Penny Arcade updates its comics Monday, Wednesday, and Friday, always at 7:01AM UTC, and posts news at other times during the week. It's Wednesday at 4:25PM UTC - more than nine hours later - and goread hasn't picked it up.
- Dinosaur Comics is updated weekdays. I'll eventually get all of them, but usually two or three at a time. For example, yesterday I marked all my feeds as read; today, I have entries from Monday and Tuesday, but not from Wednesday.
I had hoped that the move to the everyone-pays model would give you the resources (either developer time or quota) to fix these issues, but they've gotten no better, and maybe worse.
I haven't looked at what you're doing, but I believe Google Reader used PubSubHubbub where available to reduce or eliminate polling for many popular feeds.
I honestly didn't have a great experience with my last bug report, so I haven't tried again.
At Theneeds we use a "sliding window" approach to deal with polling. Say you run the scraper every hour. Each feed F_i is scraped once every n_i polls. If the feed returns more new items than its average, n_i gets decreased; if the feed returns no new items, n_i gets increased.
Perhaps with a similar trick you can run your scraper more frequently on some feeds, still keeping the cost under control.
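For what it's worth, here's a minimal sketch of the idea in Go (the names, the hourly cadence, and the adjustment rule are illustrative, not our actual code):

    type feed struct {
        interval int // poll this feed once every `interval` scraper runs
        counter  int // scraper runs since the last poll
    }

    // maybePoll is called for every feed on every scraper run (say, hourly).
    // fetch returns the number of new items; avg is this feed's historical
    // average number of new items per poll.
    func maybePoll(f *feed, fetch func() int, avg float64) {
        f.counter++
        if f.counter < f.interval {
            return
        }
        f.counter = 0
        switch newItems := fetch(); {
        case float64(newItems) > avg && f.interval > 1:
            f.interval-- // busier than average: poll more often
        case newItems == 0:
            f.interval++ // quiet feed: back off
        }
    }

The point is just that the per-feed interval is cheap state to keep, so busy feeds converge toward being polled every run and dead feeds stop eating quota.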
Thanks! I don't really know when the website has a problem, when goread has a problem, and when I have a problem, so I end up assuming that it's a goread problem. I like how the desktop web version gives "last refreshed" and "next refresh" indications.
I'm with you. I tried tmux but found no benefits and several serious drawbacks.
Many of the purported benefits - better configuration language, more maintainable code - don't matter to me:
- Screen is done and isn't going to change. Debian has had the vertical splits patch for so long I was surprised it was a patch.
- My configuration of screen is not going to change. It's five lines that I copied and pasted 10 years ago. There's just not that much that needs to be configured.
The "Multiple Clients Sharing One Session" thing is important to me and screen gets it right by default. It's useful to have a few fixed xterms on my screen and a bunch of multiplexed terminals in screen. I use one screen session per task.
I think "screen contents persisted through full-screen programs" is screen's "altscreen on" setting. Or at least that's my note from the person whose screenrc I stole.
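For reference, the relevant ~/.screenrc line (assuming that's indeed the setting in question):

    # restore window contents when full-screen programs like vim or less exit
    altscreen on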
I think he's asking what happens if you have two panes side-by-side and try to select text in one to copy. Does it also select text from the other one?
If you use the mouse, yes; but as others have noted, with tmux 1.8+ you can just hit prefix-z to temporarily zoom that one pane. So there really isn't much reason to complain about this in particular.
Well, this is not really the dilemma, though. If you don't have vertical splits, you don't have this problem - but you also don't have vertical splits. If you have vertical splits, you'll solve this problem somehow.
They're talking about using the host terminal's text-selection mode and the system clipboard, rather than tmux's built-in text-selection mode, which does respect pane boundaries.
I too generally use tmux's built-in text-selection and clipboard when I'm copying from pane to pane, because it's more convenient.
The thing is, you can synchronize tmux's selection with your system clipboard; there isn't really a need for tmux to "support" your terminal's view of the tmux session running within it.
The answer is: it depends. If you copy with the mouse, you will be copying both sides. If you copy with your tmux keyboard copy bindings, it will copy just the one side. I don't know the default bindings, because I made mine prefix-esc to get to copy mode, vi keys, v to select, y to yank.
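For reference, those bindings in ~/.tmux.conf would look roughly like this (tmux 1.8-era syntax; the xclip hookup is just one way to reach the system clipboard):

    set-window-option -g mode-keys vi
    bind Escape copy-mode
    bind -t vi-copy v begin-selection
    bind -t vi-copy y copy-pipe "xclip -in -selection clipboard"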
Although there are some real gems in there (I can't believe I never ran across vipe before!), its Debian package conflicts with the GNU parallel package: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=718816 Quite annoying.
For one thing, it allows your pipeline to both read from and write to the same file, as it defers the output until there is no more data to read from "upstream". For instance, this won't work (it truncates the file for writing before it is ever read):
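    sort file > file        # the shell truncates "file" before sort(1) ever reads it

whereas routing the output through sponge defers the write until sort has finished reading (the filename is arbitrary):

    sort file | sponge file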
A 2007 Yahoo investigation found that 40-60% of Yahoo's users came to the site with an empty cache, accounting for 20% of page views. The empty-cache experience is important.
It depends on how much browser caches have grown to compensate for page bloat. IE8 and below had a limit of 50 MB for all sites; IE9 upped it to 250 MB.
"The guarantees needed to avoid leaving the server in a bad state when handling panics would be impossible without the defer mechanism Go provides."
I'm only passingly familiar with defer, but I understand it to be equivalent to RAII in C++, Python's with statement, Common Lisp's unwind-protect, and others - does it actually provide something more, and if so, what?
Go's "defer" is not equivalent to RAII. It is function-scoped rather than block-scoped, and its semantics are based on mutating hidden per-function state at runtime. For example:
    func Foo() {
        for i := 0; i < 5; i++ {
            if Something() {
                defer Whatever()
            }
        }
        // ... the compiler can't tell how many
        // Whatever()s run here ...
    }
Compared to RAII as implemented in, for example, D with its "scope" statement, "defer" has much more complex semantics, inhibits refactoring since moving things to function bodies or inlining function bodies silently changes semantics, and cannot be optimized as easily, because of the dynamic aspects. IMHO, it has essentially no advantages over RAII and many disadvantages.
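To make the refactoring hazard concrete, consider this hypothetical example (os.Open and Close are real; the rest is illustrative):

    func handle(p string) {
        f, err := os.Open(p)
        if err != nil {
            return
        }
        defer f.Close() // runs when handle returns: once per file
        // ... use f ...
    }

    func process(paths []string) {
        for _, p := range paths {
            handle(p)
        }
    }

Naively inlining handle's body into process's loop silently changes the semantics: the deferred Close calls now accumulate until process returns, so every file stays open for the duration of the loop.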
> defer inhibits refactoring since moving things to function bodies or inlining function bodies silently changes semantics, and cannot be optimized as easily, because of the dynamic aspects.
As someone who has written and reviewed hundreds of thousands of lines of Go code, I haven't observed this to be the case in practice.
RAII doesn't fit into Go, philosophically, as it lets you trigger hidden functionality on the creation or destruction of data structures, whereas a deferred function can only be run if there's a defer statement there in the code (where you can see it).
In Go, the only way to execute a block of code is to make a function call. There are no constructors, destructors, or any other kind of side effect to allocating or deallocating data structures. This brings a huge benefit in terms of readability and transparency.
Anyway, I'm not sure why we're comparing defer and RAII, because they're generally used for different purposes.
> As someone who has written and reviewed hundreds of thousands of lines of Go code, I haven't observed this to be the case in practice.
Sure, a lot of suboptimal design decisions don't cause problems in practice. That doesn't change the fact that they're suboptimal, and in this case lead to worse performance.
> RAII doesn't fit into Go, philosophically, as it lets you trigger hidden functionality on the creation or destruction of data structures, whereas a deferred function can only be run if there's a defer statement there in the code (where you can see it).
I'm focusing more on RAII as implemented with "scope" in D; whether cleanup code runs explicitly or implicitly is an orthogonal design choice (although I prefer implicitly running code, since you need finalizers anyway in any GC'd language, including Go - so you might as well embrace it). With the "scope" statement you still write the cleanup call explicitly, but it runs in a lexically scoped way.
The main thing I find suboptimal with "defer" is the choice of dynamic mutable state as compared to lexical scoping.
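The usual Go workaround is to reintroduce a lexical scope by hand with a function literal - a sketch, with illustrative names:

    for _, p := range paths {
        func() {
            f, err := os.Open(p)
            if err != nil {
                return
            }
            defer f.Close() // now runs at the end of each iteration
            // ... use f ...
        }()
    }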
> In Go, the only way to execute a block of code is to make a function call. There are no constructors, destructors, or any other kind of side effect to allocating or deallocating data structures. This brings a huge benefit in terms of readability and transparency.
This appears to be a helper function used exclusively by the standard library to handle file descriptor closing (incidentally, the one issue I've had with Golang's concurrency model).
> This appears to be a helper function used exclusively by the standard library to handle file descriptor closing (incidentally, the one issue I've had with Golang's concurrency model).
But it's part of the public API. You can add a finalizer to any object. The semantics of Go say that finalizers are run automatically when the GC reclaims an object. So this statement is wrong: "In Go, the only way to execute a block of code is to make a function call. There are no constructors, destructors, or any other kind of side effect to allocating or deallocating data structures." It would be more correct to say "idiomatically, in Go people tend to prefer calling functions explicitly, and 'defer' encourages this."
I think the fact that it's used by the standard library to close file descriptors is actually really illustrative: you need finalizers in a GC'd language, otherwise you'll leak resources. Not all resources are stack-scoped. So implicitly running functions on deallocation is a necessary evil. You might as well embrace it in your language design.
It may be part of the "public API" solely because it needs to be available to several different components of the standard library, which is at pains to implement itself primarily in Golang.
I don't see why it's relevant that the standard library as opposed to user code needs it. The standard library is a library like any other. It needs finalization functionality because you always need that functionality in a GC'd language.
File descriptors are just one case of resources that need finalization functionality to not leak: the same applies to GPU textures, X server resources, etc. etc.
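The API in question is runtime.SetFinalizer; a minimal sketch for a non-memory resource (the texture type and the release call are hypothetical):

    type texture struct{ id uint32 }

    func newTexture(id uint32) *texture {
        t := &texture{id: id}
        runtime.SetFinalizer(t, func(t *texture) {
            releaseGPUTexture(t.id) // hypothetical: hand the GPU handle back
        })
        return t
    }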
You don't need finalization functionality in a GC'd language. If you imagine a language that lacks finalization functionality but has automatic memory reclamation, things turn out okay. Finalization functionality isn't something that sane programs depend on -- garbage collection of memory makes sense because if you run out of memory or allocate and remove pointers to a lot of stuff, the garbage collector can naturally kick in and find you some more memory to use. If you allocate a bunch of file handles, does the garbage collector kick in when your OS tells you that you've run out of file descriptors?
> You don't need finalization functionality in a GC'd language. If you imagine a language that lacks finalization functionality but has automatic memory reclamation, things turn out okay.
Not in fault-tolerant message passing systems, to name just one obvious example. Suppose that you put a bunch of file objects into a buffered channel, and then the goroutine that was supposed to receive those objects panics. Your program wants to recover from panics with recover(). Who closes the file descriptors in those channels? Nobody owns them yet: they were in a channel and the goroutine that was supposed to receive them died.
You might be able to solve this by handing out references to the channel to another goroutine that is supposed to clean up the file descriptors, but this gets really complicated. This sort of thing is why Go is GC'd in the first place. It's much easier to just have the GC clean up the file descriptors in channels in which one endpoint has gone dead, and that's the sort of thing finalizers are for and I assume it's the reason that finalizers were built into Go.
One option is for the goroutine to defer a cleanup closure that closes every file left in that channel. Panics will cause all deferred calls to run, all the way up to the one that recover()s.
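A sketch of that first option (the channel setup and the process call are hypothetical):

    func worker(files chan *os.File) {
        defer func() {
            // on panic (or normal return), drain and close whatever
            // is still buffered in the channel
            for {
                select {
                case f, ok := <-files:
                    if !ok {
                        return
                    }
                    f.Close()
                default:
                    return
                }
            }
        }()
        for f := range files {
            func() {
                defer f.Close() // closes even if process panics
                process(f)      // hypothetical work that may panic
            }()
        }
    }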
Another option is to crash the app instead of recover()ing excessively. (Obviously there are good reasons to use runtime error handling, but IMHO fewer than one would think.)
Finalization is not needed, but it is nice to have - for example, for releasing shared IPC resources.
> Finalization functionality isn't something that sane programs depend on
One definitely wouldn't want to use it _if one doesn't have to_, but sometimes there is no choice. Finalization is one of those things one does reluctantly, because one ends up having to use shared memory, for example, or because the standard library wants to help you not leak file descriptors.
> Garbage collection of memory makes sense because if you run out of memory or allocate and remove pointers to a lot of stuff, the garbage collector can naturally kick in and find you some more memory to use.
Well, collection of anything unused and limited probably makes sense, because otherwise you'd run out of it eventually. Garbage collection doesn't magically add RAM sticks to the machine, though. If you've used all the memory and still hold references to all the objects (in a GC'd language), there is nothing the GC can do. [Well, I guess you can have weak refs, like in Python...]
> If you allocate a bunch of file handles, does the garbage collector kick in when your OS tells you that you've run out of file descriptors?
Well, in a high-level language that has GC you'd probably expect to deal with File _objects_, not file _descriptors_. In that case I would expect those _File_ objects, when and if they are GC'd, to also close their file descriptors appropriately. That is where being able to have finalizers helps, because a finalizer on a File object would close the file descriptor.
I've only seen this implemented in OSes based on Native Oberon, where everything in the OS was GC-enabled, from file handles to GUI widgets.
If your application needs to communicate with the outside world on an OS implemented in a systems language without GC support, then the GC needs a little help to give those resources back to the OS.
It's not at all equivalent to RAII, or really any of those other examples; it's a way of clearing up the control flow of a function that needs cleanup work before it returns, but it is a fussy and error-prone way of expressing scoped resource access.
"Defer" is nice to have, and because it does less than scoped acquisition, it's easier to repurpose for other jobs; that's kind of thematic of Golang --- simple, orthogonal advances over C/Java/C++; a distinct lack of "theoretical" ambition.
Since the Bitcoin blockchain is public, couldn't you follow the money? Make a list of all wallets that accepted these funds initially, and then do graph analysis, either to see where the money went or provide others with a tool to avoid transactions with those wallets?
Yes, but this is somewhat like saying you could mark the banknotes used to pay off someone who's blackmailing you. If you catch someone with a marked note, that doesn't prove they're the perpetrator; it just means that they received your money somehow.
The problem is that this doesn't really help you identify the perpetrators. Both mixing services and the fact that a user can generate unlimited wallets (if someone sends money from one wallet to another, you can't prove whether they own the second wallet or transferred the money to someone else) make this very difficult.
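That said, the naive version of the parent's suggestion is easy enough to sketch - a toy taint propagation over a transaction graph (the types are hypothetical, and real analysis would have to model mixers and per-transaction change addresses):

    type tx struct{ from, to string }

    // taintedWallets marks every wallet reachable from the seed wallets
    // by following transaction outputs until nothing new gets marked.
    func taintedWallets(txs []tx, seeds []string) map[string]bool {
        tainted := make(map[string]bool)
        for _, s := range seeds {
            tainted[s] = true
        }
        for changed := true; changed; {
            changed = false
            for _, t := range txs {
                if tainted[t.from] && !tainted[t.to] {
                    tainted[t.to] = true
                    changed = true
                }
            }
        }
        return tainted
    }

With mixers in the graph, this quickly ends up tainting a huge fraction of the network, which is exactly the problem.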
Uninteresting: "Use my blog platform! It has the power of node!"
Interesting: "I wrote something new in a way or with tools that I had not used before. Here's what worked, here's what didn't, and here's what I learned."
Unfortunately, this is the former. I'd be interested in reading the latter if you wrote something up about your experience with this.
"Deploy one-off utility types for simpler code" can be called a monad or Optional. I wonder if the language developers will add more formal support for that; it looks impossible to add Optional as a library due to lack of user-configurable generics.
If it had been done at the very beginning, I think it would have been wonderful...but given that legal, idiomatic nil is already out in the wild, I think it's too late. Such a shame - in the example of error handling, returning an Option doesn't appreciably increase verbosity:
    _, err := potentiallyErroringOperation()

    // with idiomatic nil
    if err != nil { ... }

    // with monadic Option
    if err.isDefined() { ... }
Go occupies an interesting space. In my mind, I see it as competing simultaneously with C and Python. I suppose that the developers didn't see a place for an Optional type within that realm. I have to admit, I think it's a shame; huge proponent of non-nullability here.
Especially given that, with Go's lightweight lambda syntax, a functor type would be easy to work with, I'm disappointed by its exclusion.
> Go occupies an interesting space. In my mind, I see it as competing simultaneously with C and Python. I suppose that the developers didn't see a place for an Optional type within that realm.
I wouldn't assume that everything that hasn't been implemented in Go 1.1 is not implemented because the developers of Go "didn't see a place for" it.
Now, either not seeing it as more important than the things that did make it in, or seeing it as trickier to implement and acceptable to do without in 1.x - that's quite valid.
You make a valid point, but for something as fundamental as nullability, I think that's baked into the core language spec. It's possible that we could see an Option type in the future, but the fact that it's not an integral part of the language now means it would be unreliable and would defeat the purpose of eliminating NPEs.
> You make a valid point, but for something as fundamental as nullability, I think that's baked into the core language spec.
Well, it's certainly out for 1.x; I wouldn't presume to guess how much or how little flexibility there will be for 2.x if/when it happens.
> It's possible that we could see an Option type in the future, but the fact that it's not an integral part of the language now means it would be unreliable and would defeat the purpose of eliminating NPEs.
Assuming that it's not part of a breaking change, sure; but the no-breaking-changes pledge only applies to 1.x. If there is a 2.x, it will be because a need is seen for breaking changes.
I think that, beyond a handful of core features, keeping Go 1.x small was a key goal, along with getting real production usage experience with the small 1.x to decide on future directions.