The “we bring you new customers” pitch that food portals make is incredibly deceitful. Portals didn’t arrive on the scene with millions of customers to send to restaurants. They arrived with zero and siphoned them away using dirty tricks: mini sites pretending to be made by the restaurant, phone numbers owned by the portals, requiring the portal’s stickers on the restaurants’ windows.
> We can also give you access to receiving early fraud warnings from issuers (like Visa TC40s).
Can we get webhooks for these?
We currently have to manually refund charges when we get the ‘suspicious transaction’ email from Stripe, which is a pain.
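For what it’s worth, here’s a minimal sketch of how we’d like to automate that, assuming the warnings arrive as `radar.early_fraud_warning.created` webhook events (Flask + stripe-python; the endpoint path and the blanket auto-refund policy are just illustrative):

```python
# Minimal sketch: refund a charge as soon as an early fraud warning arrives.
# Assumes the warning is delivered as a `radar.early_fraud_warning.created`
# webhook event; endpoint path and refund policy are illustrative only.
import os
import stripe
from flask import Flask, request, abort

stripe.api_key = os.environ["STRIPE_SECRET_KEY"]
WEBHOOK_SECRET = os.environ["STRIPE_WEBHOOK_SECRET"]

app = Flask(__name__)

@app.route("/stripe/webhooks", methods=["POST"])
def handle_webhook():
    # Verify the event actually came from Stripe.
    try:
        event = stripe.Webhook.construct_event(
            request.data,
            request.headers.get("Stripe-Signature", ""),
            WEBHOOK_SECRET,
        )
    except Exception:  # invalid payload or bad signature
        abort(400)

    if event["type"] == "radar.early_fraud_warning.created":
        warning = event["data"]["object"]
        charge_id = warning["charge"]
        # Refund before a dispute lands. A real handler would probably
        # first check the charge against an internal "trusted customer" list.
        stripe.Refund.create(charge=charge_id, reason="fraudulent")

    return "", 200
```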
That’s exactly what we’d like. 95% of our transactions come from trusted customers who have a ~0% chance of a chargeback. We’d like to apply this check and insurance only to the transactions we’re not sure about.
Requiring it to be all or nothing makes the feature useless (not cost effective) for merchants with low fraud rates.
Adverse selection inflates premiums, simply because people who need insurance are the ones who buy it.
The point of insurance is to transfer* risk to the insurer. The insurer does that by identifying a group that is homogeneous enough that their premiums are just slightly over the payouts.
So an insurer can improve competitiveness by selling multiple products that cover different risk groups. I imagine that for Stripe, the risk variance falls in a fairly narrow band: bounded at the low end by not being worth insuring, and bounded at the high end by merchants losing their account.
* As opposed to say retaining risk, e.g. you don't buy collision on a beater.
I would imagine Stripe will always block charges it thinks are likely to be disputed. They aren’t offering carte blanche to accept transactions that are likely to be fraudulent.
Perhaps it wouldn't work for Stripe to let merchants pick and choose which transactions use this service. But if it doesn’t, it means the service won’t be used by clients with low fraud rates.
That's how insurance works: it relies on the majority of activity being fine and only a minority of it actually needing insurance. If you could select only the transactions with high risk to be insured, they'd have to charge a lot more than 0.4%.
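A toy calculation (all numbers invented) shows how quickly the break-even premium moves once only the risky transactions get insured:

```python
# Toy numbers, purely illustrative: 1,000 transactions of $50 each,
# a 0.2% chargeback rate overall, concentrated in a "risky" 5% slice.
avg_ticket = 50.00
n_tx = 1_000
chargeback_rate = 0.002                    # 2 chargebacks per 1,000 transactions
loss_per_chargeback = avg_ticket + 15.00   # refund plus an assumed dispute fee

total_losses = n_tx * chargeback_rate * loss_per_chargeback
volume_all = n_tx * avg_ticket
volume_risky = 0.05 * volume_all           # merchant insures only the risky 5%

print(f"Break-even premium, everything insured:       {total_losses / volume_all:.2%}")
print(f"Break-even premium, only risky slice insured: {total_losses / volume_risky:.2%}")
# Pooled: ~0.26% of volume. Selected: ~5.2%. A 0.4% price only works
# when the boring 95% of transactions is in the pool too.
```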
This reason for microservices comes up over and over but I’ve never understood it.
If you scale a monolith, only the hot code paths will use any resources.
Having a monolith makes it _easier_ to scale as you can just scale a single app and it will be perfectly utilized. If you have microservices you need to make sure every single one has headroom for increased load.
Scaling up a whole monolith only to increase capacity for one code path doesn’t have any unnecessary overhead as far as I can see. Remember that if your microservice is on a VM then you are scaling up the OS too, which has millions of lines of code, and rightly no one is concerned about that.
I think you're right that this point is often overstated.
It does isolate each service from the scaling _issues_ of the others somewhat - if one service gets overloaded and crashes or hangs then other components are less likely to get brought down with it. In the best case if work is piling onto a queue and the overloaded component can be scaled fast enough even dependent components might not notice too much disruption.
Another advantage is that you can scale the hardware in heterogeneous ways - one part of the app may need fast I/O and the rest of the app might not care about disk speed much at all, so you can have different hardware for those components without overspending on the storage for the rest of the app. I think that's a minor advantage in most cases though.
A sort of halfway house is to have multiple copies of a monolith which are specialised through load balancing and/or configuration values and put onto different hardware, but all running the same code. Probably not that widely useful a technique, but one that we've found useful when running similar long-running tasks in both interactive and scheduled contexts.
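As a rough sketch of that pattern (the APP_ROLE variable and role names are invented): the same codebase ships to every machine, and a config value decides which workload a given copy serves, so interactive and scheduled instances can be scaled and sized separately.

```python
# Sketch of a "specialised monolith": one codebase, one image, but each
# deployment gets an APP_ROLE (name invented) deciding which entry points
# it serves. The load balancer routes interactive traffic to "web" copies,
# while the scheduler targets "batch" copies on different hardware.
import os

ROLE = os.environ.get("APP_ROLE", "web")

def start_web_frontend():
    print("serving interactive requests")   # e.g. run the HTTP server

def start_batch_workers():
    print("consuming the job queue")        # e.g. long-running scheduled tasks

if __name__ == "__main__":
    if ROLE == "web":
        start_web_frontend()
    elif ROLE == "batch":
        start_batch_workers()
    else:
        raise SystemExit(f"unknown APP_ROLE: {ROLE}")
```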
Well, the hot path isn't necessarily the good behaviour one you would choose to scale.
e.g. login service DoS'd, but logged in users can continue using widget service, rather than it grinding to a halt as the login path heats up and consumes all resources.
Back in the day we had a common microservice: it was called pgbouncer. Because database resources are limited, it’s nice to have a stable, consistent connection pool limited to a set number of processes, and then let the application monolith scale independently. Also, when you are scaling across multiple machines you don’t need all of the code of the monolith on every machine. I’ve heard that when Amazon switched to a SaaS model, creating AWS, they were locked into a 32-bit monolith in which code was limited to a few GB, so scaling resources independently was valuable. Was this useful? I am collecting data on whether I’m polite, snark-free and helpful.
You're assuming an app that scales linearly with hardware. That's very hard to engineer. That's in fact the problem we're trying to solve: The hardware's there but the app can't make use of it due to some contention somewhere.
If you scale a monolith you have to scale the whole thing.
You seem to be describing that a monolith is more efficient than a microservice in a non-scaling situation, which is true, but I think you have missed the point on scaling.
Sounds like a bad situation. If I were you I would simply change the app ID, create a new Play account and publish it there. If the problem is with the connection to a banned account and not the app, then this removes that problem/connection.
Do NOT do this. Google will then ban all your accounts including Gmail, drive and ads as creating an account to circumvent their ban is really dangerous.
Unfortunately this ban is for life, and neither you nor related accounts are allowed on Google ever again.
Did AT&T or Bell Telephone have the absolute power to keep you off the phone network? There is no way they could, technically or realistically. Yet Google can do both.
If Google is anything like Amazon, that will probably not be as easy as it sounds. Name, credit card, address, any of those will be used to link the newly created account with the old one and it will get banned again. If Google is feeling particularly clever, they will have fingerprinted the app somehow and changing the ID won't be enough. This is, after all, a company who makes their living on such techniques.
Won't they associate you again with the banned account through your address, credit card, phone number, email or anything else shared with the old account? It looks like they apply very extensive transitivity, including (from a previous story I read on HN) using recovery email addresses to associate and ban accounts.
There would no doubt be people succeeding at this, but it would be a real pain ensuring that none of the details match to the old account and that no future actions end up causing a connection.
Similar situation here – large increase in price for offering the same thing. In the past they’d jacked up the price, saying they’ve built new stuff so should charge more, but offered no option to keep using only the old stuff. IMO they completely abuse their customers, and at the same time I don’t blame them one bit, as customers like us are so locked in that it’s too much hassle to move.
2 - every operation, every brush stroke, should be nondestructive, stored on automatic layers, and always adjustable.
3 - the rendering pipeline should be built from the ground up to support the nondestructive workflow on modern hardware. This means aggressive caching of layers with GPU compositing throughout.
4 - a full commitment to open data formats. Since a nondestructive workflow requires storing all of the instructions for every edit, these should be save-able in an open, human-readable format such as JSON. This would open the doors to scripting the editor with external tools, with the nondestructive editing instructions coalescing into a standard API.
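To make that concrete, here is a hedged sketch of what such a document could look like (operation and field names are invented): every edit is stored as data, and rendering replays the list to build the layers.

```python
# Illustrative only -- operation and field names are invented.
# Each edit is stored as data; a renderer replays the list to build layers.
import json

document = {
    "canvas": {"width": 1920, "height": 1080},
    "edits": [
        {
            "op": "brush_stroke",
            "layer": "layer-1",
            "brush": {"name": "soft-round", "size": 12, "color": "#1a1a1a"},
            # Raw input samples: x, y, pressure, timestamp (ms)
            "path": [[100, 200, 0.4, 0], [103, 204, 0.6, 16], [110, 210, 0.8, 33]],
        },
        {"op": "gaussian_blur", "layer": "layer-1", "radius": 2.5},
        {"op": "adjust_curves", "layer": "layer-1", "points": [[0, 0], [128, 140], [255, 255]]},
    ],
}

# Human-readable, diff-able, and scriptable by external tools.
print(json.dumps(document, indent=2))
```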
Have you estimated the memory requirements for what you are proposing? Each brush stroke would need to store some image data for this to work. Just storing e.g. the pen movement and rendering the result on the fly will likely not recreate the same result in an updated version of the program, altering the image that the user created. Also, reopening complex files would take longer than users are willing to tolerate. The undo stack (which internally is pretty much an implementation of non-destructive editing) is about as far as you can realistically take the concept.
Non-destructive workflows also have a huge maintenance problem: you can never again touch code that is involved in the creation of the final result after it has been released, because it might break the user's files. It is not just the loading and storing part: the whole backend can never be evolved in a reasonable way without breaking backward compatibility. After a while you sit on a pile of code for legacy operators and data models that you need to support, and that stops you from doing meaningful development.
I love non-destructive editing in many cases, but I also have some experience implementing the concept, and it has shown me how constraining it is and how much effort it takes.
> Each brush stroke would need to store some image data for this to work.
No, brush strokes would be stored as paths that simply record the input information needed to reproduce what the artist did with her mouse or stylus.
I hear you about the issue of breaking changes though. I think it takes real discipline to develop a format that's adaptable and capable of migrating files without visible changes. It would take one hell of a test suite though.
Google Tilt Brush saves brush strokes. It takes roughly a minute to open a moderately complex sketch in that program.
The only way to not change the output is to match the already released operators exactly. This is almost impossible unless you freeze the code for each operation once it has been released.
Well, what if your image editor used an interpreter and the nondestructive edits were recorded in the document as scripts which, when replayed, construct that layer of the image. That way you can change the application all you want as long as you don't break the interpreter.
As for Google Tilt Brush, well that's just a failure to use proper caching.
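Roughly what I have in mind, as a toy sketch (all operation names are invented): the interpreter replays the recorded ops to build a layer, and a cache keyed on the op list means unchanged layers never get recomputed.

```python
# Toy sketch of replaying recorded edits through an interpreter,
# with a cache so unchanged layers aren't recomputed. All names invented.
import hashlib
import json

def render_op(pixels, op):
    # Dispatch table of primitive operations the interpreter understands.
    # A real implementation would rasterise strokes, run filters, etc.
    if op["op"] == "fill":
        return [[op["value"]] * len(row) for row in pixels]
    if op["op"] == "invert":
        return [[255 - px for px in row] for row in pixels]
    raise ValueError(f"unknown op: {op['op']}")

_layer_cache = {}

def render_layer(base, ops):
    # Cache key: a hash of the op list, so editing one layer never
    # forces a replay of the others.
    key = hashlib.sha256(json.dumps(ops, sort_keys=True).encode()).hexdigest()
    if key not in _layer_cache:
        pixels = base
        for op in ops:
            pixels = render_op(pixels, op)
        _layer_cache[key] = pixels
    return _layer_cache[key]

blank = [[0] * 4 for _ in range(4)]
ops = [{"op": "fill", "value": 128}, {"op": "invert"}]
print(render_layer(blank, ops))   # computed once
print(render_layer(blank, ops))   # served from the cache
```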
Proper caching brings us back to the question of memory usage that I started out with. For the record, I don't think that Tilt Brush can be accused of a failure to cache properly. It needs to recreate sketched 3D models for VR, and I think there is some kind of LOD stuff going on when creating the final geometry, based on the performance of the computer it is running on.
And yes, any description of nondestructive edits is by its nature a script that needs to be replayed by an interpreter to get the final state of the document. No matter the level at which you introduce the interpreter, you cannot upgrade it without risking backwards compatibility with existing documents.
And if you want the scripting language to be low level enough that the implementations of your operators need to be dumped into the document, then you need to (a) write about half of your program in said scripting language and (b) end up duplicating that in every document that is created.
> Proper caching brings us back to the question of memory usage that I started out with. For the record, I don't think that Tilt Brush can be accused of a failure to cache properly. It needs to recreate sketched 3D models for VR
Oh, so Tilt Brush is 3D? So it's not at all comparable to a 2D image editor. I don't see how proper caching of a fully nondestructive editor should use any more space than a Photoshop document with a ton of layers. In fact, I think it could use a lot less, since Photoshop wastes tons of space when you duplicate source images in order to build up filtered layers with masks and so on. A nondestructive editor could save the space by not duplicating those source images so much.
> And if you want the scripting language to be low level enough that the implementations of your operators need to be dumped into the document, then you need to (a) write about half of your program in said scripting language and (b) end up duplicating that in every document that is created.
Yes, this is what I had intended. But I don't think it's as bad as you say. You don't need to dump the bytecode (or equivalent) of your entire editor into every document, you only need to include the bytecode needed to reconstruct each of the edits made by the artist.
The editor itself could even include the source code of all its operators and let the artist modify it as she goes. It would be like the Emacs of image editors!
Tilt Brush is comparable in that it needs to work internally the way you propose. And all it has to do is fill relatively small vertex buffers with geometry, yet it takes forever to load a file.
I am not going to create a full model to estimate the memory usage of the different designs. It is however clear that a nondestructive approach is more computationally heavy in principle, because it will always rerun the same operations more often than a destructive one. And all these times do add up. An operation that takes 100ms after a click or keypress is perceived as practically instant, but 100 of these take 10 seconds. And there are operations in Photoshop that take a lot longer than that.
What you say is all true, but I think it is still worth a try. There are a lot of optimizations that Photoshop simply doesn't do. Its ancient code base leaves a ton of room for improvement with a fresh start. I don't know who you'd get to build this thing, though. Maybe I will one day.
How do you know that Photoshop could be a lot faster than it is right now? I would not dare to make such a statement unless I have seen its current codebase.
https://www.flipdish.com/careers/
Mention HN in the application and I’ll make sure it’s prioritised.
I’m totally biased, but I think we’ve got a great culture and a lovely, friendly tech team who love solving problems together and getting stuff shipped.