
Off topic: this anti-Facebook stance by some people here on HN is getting ridiculous.. sad that this is a comment here that apparently gets upvotes. I would say that if you do not care about KolibriOS, just don't comment.


It's just not a great fit for an open source project IMO. They go to all this trouble to build something noncommercial without tracking, and then start requiring a commercial tracking platform to collaborate?

In fact, one of the reasons I use HN so much is because it's not doing any of that. And because I can choose what I read (rather than Facebook's algorithms deciding what appears on my timeline). I'm sure many people come here for that reason, and that's probably behind much of the anti-Facebook sentiment. Those sentiments are one of the reasons to come here :)


It would be very bad if OS memory usage did not return to flat, no?


It sometimes feels like a bonus today if apps bother to clean up their memory at runtime, so maybe that's why the parent poster thought it a good and special thing that the OS frees the memory of an ended process.

Btw, many people don't seem to know that even in languages with a garbage collector, like JavaScript, you can create awesome memory leaks. And I would bet most websites actually do: the web only works the way it does because tabs get closed regularly, and because RAM is ever increasing. But browse with an older smartphone and you hit the RAM limit very quickly.
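For illustration, a minimal sketch (all names invented) of two common leak patterns that a garbage collector cannot save you from, because the references never become unreachable:

```typescript
// Sketch of GC-proof leaks: the collector only frees what is unreachable,
// and these references stay reachable for the page's whole lifetime.

// 1. A module-level cache that only ever grows.
const responseCache = new Map<string, Uint8Array>();

function remember(url: string, body: Uint8Array): void {
  responseCache.set(url, body); // nothing ever evicts old entries
}

// 2. A timer that is never cleared keeps its closure (and the big buffer) alive.
function startPolling(): void {
  const bigBuffer = new Uint8Array(50 * 1024 * 1024); // ~50 MB
  setInterval(() => {
    // The closure references bigBuffer, so it can never be collected,
    // even if nothing else in the app still uses it.
    console.log("still holding", bigBuffer.length, "bytes");
  }, 10_000);
}
```

Patterns like these only get cleaned up when the tab (i.e. the whole process) goes away, which is exactly the "closed regularly" point above.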


I have 32GB of RAM and it sits unused most of the time. Right now I am at 4GB/32GB. The browser simply isn't a significant source of memory consumption. Open Atom and you can easily get to 500MB for a single application, which is completely wasteful. The browser can run dozens of apps in 4GB.


On the other hand, my browser (Firefox) blows past 8GB of RAM a few times a day. Sometimes I wish people programmed like we had 512 megs on a luxury machine.


I also have 32 GB RAM and right now am at 25 GB + 2.4 GB in swap. I'm at around 20 GB most of the time but always have at least 3 Firefox tabs open. Sometimes a buggy process (looking at you, Apple…) decides to go haywire and use 30-60 GB of virtual memory. I don't even notice that until I have a look into the activity monitor. Handling RAM spikes seems to be no issue at least on macOS.


I have 32GB of RAM and usually browsers use about 10GB of it. But I do open a stupid amount of tabs.


Chrome runs a separate process per origin, so that adds up almost as fast as Atom.


If you close the app, the OS should clean up the memory no matter what the app was doing.


But closing tab != closing app.


In FF and Chrome, tabs are processes. So aside from resources allocated on that tab's behalf by other processes not being cleaned up, the OS will clean up all the memory the tab asked it to allocate when the tab is closed.


Firefox user here, plenty of tabs, Win10 as OS. 3.7 GB before opening the GIF in a new tab, 3.8 after opening, 3.7 after closing. Reopening and closing it several times in a row yields the same results, consistently. At least for my setup (Win10 heavily crippled to my own liking), closing tab == closing app in terms of memory gained back.


It should be equal when we are talking about webapps.


It would be fair for a browser to assume that if you’ve just visited one page that you might return soon, and so keep assets in cache for a little while.


Well yes, if done right, some browser caching is fine.

But that could pile up quickly, so I am not sure if and how it is done in the various browsers.


I reckon we're probably a lot better off today than in 1995. (Windows 95)


Depends on what you mean; there is no real reason to blank memory just because a program exits.


A program should not hold memory anymore when it is no longer running. If it did, that would be an OS memory leak.


A process which doesn't exist cannot hold memory. But the OS can certainly choose to defer the erasure as long as there's no better use for that memory. This is often done to speed up the performance of processes which are frequently quit/stopped and reopened/started.


> A process which doesn't exist cannot hold memory

Not quite. Some leaks are across processes. If your process talks to a local daemon and causes it to hold memory, then quitting the client process won't necessarily free it. In a similar way, some applications are multi-process and keep a background process active even when you quit, to "start faster next time" (or act as spyware). This includes some infamous things like the Apple updater that came with iTunes on Windows. It's also possible to cause SHM-enabled caches to leak quite easily. Finally, the kernel caches as much as it can (file system content, libraries, etc.) in case it is reused. That caching can push "real process memory" into swap.

So quitting a process does not always restore the total amount of available memory.
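As a toy illustration of the daemon case, here is a hypothetical sketch (the port, route, and file split are all invented): a short-lived client hands data to a long-running local service, and the OS reclaiming the client's memory does nothing about the copy the daemon keeps.

```typescript
// daemon-and-client sketch: the client's exit frees the client's memory,
// but the daemon keeps its cached copy until it is explicitly told not to.
import * as http from "node:http";

// --- long-running local daemon ---
const cache = new Map<string, Buffer>();

http.createServer((req, res) => {
  if (req.method === "PUT") {
    const chunks: Buffer[] = [];
    req.on("data", (c: Buffer) => chunks.push(c));
    req.on("end", () => {
      cache.set(req.url ?? "/", Buffer.concat(chunks)); // held indefinitely
      res.end("stored\n");
    });
  } else {
    res.end(cache.get(req.url ?? "/") ?? Buffer.alloc(0));
  }
}).listen(8123, "127.0.0.1");

// --- short-lived client ---
// After this request the client can exit; the ~100 MB copy lives on in the daemon.
const payload = Buffer.alloc(100 * 1024 * 1024);
const clientReq = http.request(
  { host: "127.0.0.1", port: 8123, path: "/big-blob", method: "PUT" },
  (res) => res.resume()
);
clientReq.end(payload);
```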


I would expect that if you read data (the GIF image in this case) from the block device, it will stay in the page cache until there is memory pressure.


Let's be realistic.. let's say you are a C++ programmer and want to learn some modern JS framework. I bet you it will literally take less than a week of concentrated study for you to become better than 80-90% of people working with it. You can get a book on the subject, there are great tutorials, heck, just reading the reference will get you far.

This is true for a lot of the stuff on YouTube, Coursera, etc., I believe. It's for people who don't want to get to the destination faster by reading a few books and doing the exercises in them.


Everyone has gaps in experience.

I've worked in both of these kinds of domains and different kinds of people thrive at doing each. The kind of problems you face are different.

Most of the backend C++ types I've worked with aren't so great at "design for failure" types of environments whereas on the web development side of things I've found people are much more receptive.

I'm working with a few hundred backend engineers who all have a hard time with thinking infrastructure is always available and can handle infinite throughput. They absolutely stink at reasoning about the network. And these aren't dummies -- they're all MIT/Waterloo/etc grads.


> I'm working with a few hundred backend engineers who all have a hard time with thinking infrastructure is always available and can handle infinite throughput.

Could you clarify this statement? Are you saying you work with hundreds of backend engineers:

- who all believe infra is always available and can handle infinite throughput (???)

or

- who can't wrap their head around an environment where infra is so scalable / highly available that it might as well be "infinite", and so they are always designing for tradeoffs that don't exist in your environment?

If the former, where do these people work? If the latter, where do these people work?


You don't have to look hard to find stories of people running up huge cloud infrastructure bills by using it improperly.

Engineers in general poorly understand the connection between their code and resources consumed. Until something bites them and they learn.

I'd say that this is pretty widespread throughout the industry.


I know there are plenty of engineers with those gaps, and they learn from experience, as one does.

My issue was with the statement that all backend engineers you work with (hundreds of them!) have this weakness. Does not match my experience, so I was wondering where you work.


A mid-sized, already-public SaaS company.


The barrier to entry is not the availability of materials, it's the jargon and working knowledge of the ecosystem required to find those materials and decide which are worth your time.

I've been the C++ programmer with a week to learn a modern JS framework. I'll never do that again, and will always hire an expert to bring myself and a project up to speed. It's a massive waste of time and money.

You also won't be "better" than anyone else at it, since expert-level C++ knowledge is not very translatable to other domains (but that's a C++ problem more than anything; working in it is like playing a piano, not riding a bike).


Why on earth would you think that? C++ programmers are not “better” than web developers. They work in very different domains, and skills built up in one don’t necessarily transfer to the other.


C++ has a high barrier because it exposes the inherent complexity of computing. If you've managed to fight through that you can probably pick up the simplified concepts in Javascript, Python, Ruby, etc.


C++’s complexity has nothing to do with the “inherent complexity of computing”. It’s just a kitchen sink language.

Likewise, concepts in the domains of the languages you mentioned aren’t “simpler” — they’re just different.


deleted


What does a brokerage/exchange going public have to do with the top of cryptocurrencies????

Crypto is here to stay. There are plenty of exciting projects, like Filecoin for decentralized storage and Cardano for 3rd-gen smart contracts, and more are coming online.

I hate when people treat cryptocurrencies and stocks as just something you buy and sell.

There's actually a vast amount of people and effort underneath blockchain projects that are trying to solve very hard problems. Just as stocks are an actual part of a company where people work hard to solve problems and create value in the economy.


If history is any indication, we are still in the early cycle of the current bull run. I think they have delayed their IPO long enough, they were founded in 2012.


Getting a big price for the company isn't the only motivation for going public. They have employees and investors to take care of, even if they'd be 2X bigger in 2030 or something.


They also make money when people sell.


Genetics is one thing, but Japan definitely did allow foreign culture into their mix. Just look at the movie "The Last Samurai" to see what I mean. I wonder if it's possible to put out a number for that.

As for my country: as Europeans we are heavily influenced by Americans, whom we seem to completely accept as our cultural leaders. How do the Japanese feel about them, though?


Ah yes, the famous Hollywood documentary “The Last Samurai”. :-)


:)

Well.. it is an anecdotal representation of some bigger cultural change that happened, though, is it not?


From where I look at it, Japanese culture seems to be quite interested in adopting exotic elements very quickly and likes them for that reason.

There is nothing more Japanese than a parfait in the afternoon, a good curry in the evening, and a traditional Christian wedding before attending the Shinto shrine for a final blessing.


I would argue that Japan very definitely did not “allow” foreign culture in because the country was “opened” under threat of force by Matthew Perry. Unless you mean earlier entries of culture like firearms and Christianity in the 16th century but I’m not sure that entry can be attributed to Japan as a country that far back.


The last samurai isn't Tom Cruise; it's the man he meets after being injured by the Japanese government, when he begins to appreciate the shogun via his interactions with the last samurai. Comments on the movie tend to give the impression that Cruise is playing an Asian man rather than an injured Western soldier.


Not sure if it applies to engineers, but there's a difference in the job description between defining and implementing. Sometimes someone knows some theory, does some math, and is done, and his or her result is passed to another person who implements it. Sometimes it's the same person doing both, but probably less often. Sometimes the implementer grows and starts putting down some ground rules (becomes an architect), or the other way round (for example because there might be more jobs for implementers).

Perhaps another distinction is between trying something and seeing if it works, and knowing it will work (or won't work) before it's implemented. Then the difference is between knowing from experience and knowing from having studied the theory (and having worked through the proofs).


Afraid so. That reminds me: IIRC in the Dune universe they eventually abandon all technology. I wonder if we're on that track for real.


Yeh, it was sold as a jihad.

But I think Hyperion Cantos by Dan Simmons would fit better.


Not quite. It's possible to live in the modern world using only Free and Open Source software. Stallman does this, but I think he's pretty much alone.


Wikipedia says there are ~350,665 Old Order Amish (apparently in the New World? https://en.wikipedia.org/wiki/Amish#cite_note-Elizabethtown_...).


This is not an exaggeration. Nowadays basically anything you buy that uses electricity also has chips included for various purposes. Those chips of course run proprietary software.


Sure, but I'd meant live in the modern world in the sense of living in tech-enabled society, rather than just living in the current day.


I've read that he borrows people's phones.


IIRC that took a Matrix-style war that humanity just barely won, so I don't think there's much hope among writers that we get to a post-tech universe without breaching the brane of cataclysm, seeing machine-borne oblivion on the other side, and pulling our heads back just in time (or perhaps an instant later...).


> IIRC that took a Matrix-style war that humanity just barely won

Only in the terrible prequels written by Frank Herbert's idiot son, who decided that a rejection of machines replacing humans = literal Terminators.

One of the more interesting things about the Butlerian Jihad is that in the end, without thinking machines, they instead turned humans into machines.


> One of the more interesting things ...

Yes, and this is often overlooked.

The question is whether we should turn machines into humans or vice versa, and there is no third option.


The third option is coexistence, though this too is something writers seem to be bearish on (Egan's Diaspora gives it a bit of consideration before simply killing off most organic life with a gamma-ray burst, which I found somewhat cynical or even spiteful, given the timing).


Bit flips can happen, but regardless of whether they get repaired by the ECC code or not, the OS is notified, IIRC. It will signal a corruption to the process that is mapped to the faulty address. I suppose that if the memory contains code, the process is killed (if ECC correction failed).


> I suppose that if the memory contains code, the process is killed (if ECC correction failed).

Generally, it would make the most sense to kill the process if the corrupted page is data, but if it's code, then maybe re-load that page from the executable file on non-volatile storage. (You might also be able to rescue some data pages from swap space this way.)


If you go that route, you should be able to avoid the code/data distinction entirely, as data pages can also be completely backed by files. I believe the kernel already keeps track of which pages are a clean copy of data from the filesystem, so I would think it would be a simple matter of essentially paging out the corrupted data.

What would be interesting is if userspace could mark a region of memory as recomputable. If the kernel is notified of memory corruption there, it triggers a handler in the userspace process to rebuild the data. Granted, given the current state of hardware, I can't imagine that is anywhere near worth the effort to implement.


> What would be interesting is if userspace could mark a region of memory as recomputable.

I believe there's already some support for things like this, but intended as a mechanism to gracefully handle memory pressure rather than corruption. Apple has a Purgeable Memory mechanism, but handled through higher-level interfaces rather than something like madvise().


The US salary situation is clearly superior. I wish I had somehow made my way into the US after getting my MSc in CS. But some choices led me elsewhere. Anyway, if I had, I'd probably be retired by now.


FWIW: as a dev I've been in contact with both Apple and Google reps, and it was like night and day. Apple actually offers support and tries to resolve my problems, while Google feels like dealing with bureaucracy at a public office, or worse. (Speaking of European bureaucracy; YMMV.)


Well the difference is that as a general app developer you barely ever need to interact with Google. As for Apple, you do it a lot. I'd rather take rare and abysmal interactions than constant, annoying ones.

I've had numerous app rejections because of reviewers simply incapable of reading instructions, and it's immensely frustrating. Especially when important hotfixes etc. are put on hold for days for no reason whatsoever.

Instruction: Do NOT tap button X to log in, instead use method Z.

Rejection: Tapped button X, could not log in. Your app is broken.

Welp, time to resubmit and wait for a couple of days to possibly get the same rejection again.

EDIT: To clarify, the login procedure is different and simplified for test accounts, such as the ones reviewers are using. Real users need to identify with real ID for (valid) reasons.


> Well the difference is that as a general app developer you barely ever need to interact with Google.

Those days are over. Want to access text messages because you have 2-factor logins? Want to access phone logs because your app measures how much time you spent on the phone with each of your clients?

Be prepared for a lot of bureaucracy.

Of course you can't even access texts or calls on an iOS device, but then again when that's the case none of your customers can ever force you to build a feature around it.


Those permissions are rather easily abused so I'm glad Google is protecting my privacy by restricting them.


Sure but the bureaucracy was shocking


Years ago, a family member of mine hired a college student to develop an informational application for their small business. This app offered reference guide type information for a niche. To set expectations, my family member paid sub $10k for the entire app to be developed when mobile apps were new.

After a few years it had attracted a few thousand users but needed updating and the developer was non-responsive. The family member of mine was non-technical and had allowed the developer to publish the app under their own developer account.

A saga began that I won't bore everyone with the details of, but basically this family member didn't want to lose the thousands of users. They tried to get the developer to send them the app to maintain, but the developer was non-responsive. They tried to enforce their trademark on the app, but Google would only delist it.

Now they had no listing at all for their company, so they tried to start over. They tried to create a new app with the same name, but Google's review process wouldn't let them because another app had already existed with that name. Armed with a trademark and people we knew who worked at Google, we got exactly zero steps further after three months of trying to work with Google on the issue.

Eventually, we tracked down the mother of the developer who had ghosted on us and paid them to give us their developer account. There we showed the trademark, had the app re-activated, and moved it to another Google account we controlled.

Basically, Google couldn't help us at all. It was a mess. Eventually we got things sorted but we had to go around Google.

Was this Google's fault? Heck no. The family member got unprofessional help from a student developer who ghosted on them but Google didn't make it easy to fix the issue. They made it impossible.


So to recap: party Foo tries to take over the developer account of party Bar, using trademark law. Google makes this not possible.

How is this a problem?


Party Foo allows party Bar to release an app using Foo's trademark. Party Foo wishes to release their own app using their trademark, as they've rescinded the permission of party Bar. Google makes this not possible.

How is that not a problem? Yes, parties Foo and Bar probably used the wrong procedure when releasing the app, but they can't fix that now.

Google has no exception-handling ability, and it's awful. You can't merge G Suite organizations when there's a corporate merger. Clearly, you should have known five years ago that you were going to be purchased by X. Same story, no exception handling.


Google is not alone when it comes to poor exception handling. The case you cite (a corporate merger) is something they should absolutely support.

I recently had a problem with Dell/VMware when we wanted to change the domain name associated with a VxRail cluster. After working with their support teams for months, they eventually threw up their hands and said: "It cannot be done unless you reset and do a fresh install."


That certainly sounds less than ideal. I have also had a few interactions of this nature with Google, and unless you have contacts in the company or have some sort of partnership, it's very hard to get any form of manual intervention.

That being said, Apple is also known for being incredibly draconian when it comes to account management. I don't think you would have been in a better position on iOS.

I think understaffed, off-shored, and lacking permissions is just the baseline when it comes to this sort of tech support.


The fault was thinking you could pay less than $10k and get competent professional development services for your app. That's less than a month of a professional developer's time. I had to pay half that just to get the interior of my house painted, and it took two people less than a week.

And without a maintenance agreement the developer isn't going to help you; they have their own life to live. You think they are going to take vacation days from their next job to figure out that old code? As usual, the problem is the client.

Full disclosure: I write this as a contract developer who had to take over an active app on the store when the client fired the previous developers, and tried to update it themselves. I have to update 140,000 lines of code with zero comments or documentation, and the previous devs aren’t accessible. In my case the clients screwed themselves, but got lucky cause I’m very very good.


So it's not possible to steal accounts - sounds like a good thing to me


Forgive my ignorance, but why would you ship an app with a broken login system (or whatever) in the first place?


I updated the post. The normal login flow requires Swedish digital ID. Reviewers won't have access to that.


I see, thanks. I can imagine how frustrating that must be, "I don't have a Swedish ID therefore your app doesn't work".


On a side note: not having a Swedish ID in Sweden makes a lot of things very cumbersome and some even impossible, while having one makes for one of the most straightforward and convenient bureaucracy systems I have experienced.


Yeah. The increasing reliance on BankID in Sweden is a blessing and a curse. For us Swedes born into the system it's incredibly convenient.

On the flip side I've heard my fair share of horror stories from expats that get locked out of necessary services only because they don't have a social security number and bank account (yet). And that process can take a while.


I hope we'll reach a point where we have a better system than a simple Social Insurance Number in Canada, which has no cryptographic protection whatsoever and can be a major pain in the butt if leaked in a data breach like the one we had with the Desjardins Credit Union.


It's very frustrating indeed! I don't know how many times I've edited and attempted to clarify the instructions, but I'm still getting bounces. I really sympathize with the reviewers who are probably under a lot of pressure. But it doesn't change the fact that a hotfix release of our app on iOS is anxiety inducing.


As a practical solution, I wonder if you could provide the reviewers with a fake ID that you hardcode into the backend for test accounts, which could allow them to use the same login UI (even if the underlying code path is different).
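For illustration, a minimal hypothetical sketch of what such a backend shortcut could look like (the route, REVIEWER_TEST_ID, issueSession, and startBankIdAuth are all invented names; the real identification flow is not shown):

```typescript
// Hypothetical sketch of a reviewer-only shortcut in a login endpoint.
// REVIEWER_TEST_ID, issueSession(), and startBankIdAuth() are invented names.
import express from "express";

const app = express();
app.use(express.json());

const REVIEWER_TEST_ID = process.env.REVIEWER_TEST_ID ?? "1212121212"; // fake personal number

app.post("/login", async (req, res) => {
  const { personalNumber } = req.body as { personalNumber?: string };

  if (personalNumber === REVIEWER_TEST_ID) {
    // Skip real identification for the App Store review account only.
    return res.json({ session: issueSession("app-review-user") });
  }

  // Everyone else goes through the real digital-ID flow (not shown here).
  const session = await startBankIdAuth(personalNumber);
  return res.json({ session });
});

// Placeholder implementations so the sketch type-checks.
function issueSession(userId: string): string {
  return `session-for-${userId}`;
}
async function startBankIdAuth(personalNumber?: string): Promise<string> {
  throw new Error("real identification flow omitted from this sketch");
}

app.listen(3000);
```

Whether something like this is acceptable depends on the service's security requirements, and as the reply below notes, it may not fit a UI-less ID flow at all.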


The ID login flow is basically UI-less. The user taps the login button, a separate identification app (that basically all Swedes have) is launched, and as soon as the authentication is completed the user is navigated to the logged-in view. It's a very seamless experience, and a lot of Swedish apps work this way.

On the other hand it means that it's impossible to determine which user is logging in until the proper auth is complete. And thus you cannot have "special accounts" using this flow.


Ah, the login flow is in a separate app. That does indeed make it tricky!


Why do people deliver software with bugs at all???


Agreed, why deliver an app with an egregious bug you know about?


Because the Powers that Be insist on making a particular release date, consequences be damned.

I am currently in this situation.


Is it my imagination, or has people's ability to detect and understand sarcasm just fallen off a cliff over the past 1-2 years?


It's been longer than that. I would expect this is a sore point with people because few professions allow their practitioners to knowingly ship defective products to meet a deadline.

Alternatively, why understand sarcasm when the lack of understanding provides some folks with an amazing weapon?


It would appear so.


You need to do a better job documenting test logins and instructions for reviewers. Not defending Apple, but don’t half-ass the things you control when you go to review.


I don't know how you got access to our developer console, but you need to stop.


Why do you have button X if you're not supposed to tap it? Will all your users read, understand and follow your instructions?


Sorry for being unclear. I updated my post. The service uses Swedish digital ID verification. This is not feasible for reviewers.


You can't be the only app doing this. Others must have been approved. How did they handle it?


Lots of apps have a special login system or even an Apple review user account just for the reviewers.

They will have the same hurdle. And resubmit again and again and, possibly, again. That is why many developers are so frustrated. It isn't some one-off problem. It has been going on for years.

Just like the butterfly keyboard: it wasn't until a journalist wrote about it and mainstream media picked it up, causing Apple PR damage, that Apple acted on it. It's the same with App Store review, this time with DHH.


Don't get me wrong, most of the time they read the instructions and everything works great. No issue.

But the uncalled-for rejections happen often enough that we can never feel confident. As I say, it's a major nuisance, but it isn't unworkable.


You create special IDs and logins for Apple reviewers so you don't have this problem. Or you decide that's too much hassle and accept the extra days in review as a different cost.


The same way. Resubmit until a reviewer reads the testing comments.


Your users will absolutely tap X. They will find your app is broken.


Well, that is what most of your users would have done anyway. You dealt with a reviewer instead of multiple angry users that couldn't log in; looks like the review process works.


It's a different login procedure for test accounts (such as the ones made available for reviewers).


Curious: any reason it can't be the same button?


I updated my post; the user-facing login uses Swedish digital ID, which the reviewers naturally do not have access to.


Yeah, it’s amazing how painful Google makes any sort of developer support for a company that’s supposed to be “developer-centric”.

With Apple, you may have to convince them of your opinion, but you can very quickly talk to a human who will reply with an actual, thoughtful response.

With Google, if you manage to get a human on the other side of the line, you’re probably weeks or months later, several automated forms and replies deep, and completely confused.

