I would generally double whatever your expectations are for the initial feature development. Tests are essentially a second implementation of the same behavior from a different angle, run in parallel in the hope that the results match. Every feature change now means changing two systems. You save some subsequent time with easier debugging when other features break tests, but that's somewhat eaten up by maintaining a system twice the size.
There are reasons many MVP developers and small teams, whose focus is more on rapid feature implementation than on large-team coordination or code stability, forgo writing tests. It doesn't make sense in all circumstances. Generally, testing pays off for code that is more complex, less grokkable, more large-team-oriented, or published as a public library.
Tests are only a second implementation if you use test doubles incorrectly. Test doubles should only be used for I/O outside the program under test that you can't realistically run locally, such as a network dependency (e.g. mocking a SaaS service), or for performance (mocking database responses instead of spinning up a test database instance). If you do it right, most of your tests just exercise each layer along with everything below it.
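To illustrate the distinction, here's a minimal sketch (using Python's `unittest.mock`, with made-up names like `fetch_user_plan`): only the network boundary is replaced with a test double, while the actual business logic runs for real.

```python
from unittest import mock

def fetch_user_plan(client, user_id):
    # Real logic under test: interprets a SaaS API response.
    resp = client.get(f"/users/{user_id}")
    return "premium" if resp["tier"] >= 2 else "free"

def test_fetch_user_plan():
    # The double stands in only for the network call, not the logic.
    fake_client = mock.Mock()
    fake_client.get.return_value = {"tier": 2}  # canned SaaS response
    assert fetch_user_plan(fake_client, 42) == "premium"
    fake_client.get.assert_called_once_with("/users/42")

test_fetch_user_plan()
```

Nothing here re-implements `fetch_user_plan`; the test only pins down observable behavior at the one boundary that can't run locally.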
I have yet to see a case where omitting tests actually helps you move meaningfully faster - you're probably generating more heat than light, and that makes you feel like you're moving faster.
I guess I'm referring here to the unit test standards typically enforced by automated code review frameworks - i.e. "100% coverage" checkers. With those, any subsequent change to the system requires understanding and modifying both the original code and the set of tests, and ends up being roughly double the effort. It's not literal duplication - but mocking expected inputs/outputs becomes a system of its own (often with its own mini-language, in most frameworks), which is not always trivial. You may be referring to something different though.
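A hypothetical sketch of the overhead being described (all names invented): under a strict-coverage mocking style, the test ends up restating the implementation's internals, so every change has to be made twice.

```python
from unittest import mock

def charge(gateway, order):
    # Implementation detail: 20% tax added before payment.
    tax = round(order["amount"] * 0.2, 2)
    return gateway.pay(order["id"], order["amount"] + tax)

def test_charge_mirrors_implementation():
    gateway = mock.Mock()
    gateway.pay.return_value = "receipt-1"
    assert charge(gateway, {"id": "o1", "amount": 10.0}) == "receipt-1"
    # The expectation below re-encodes the tax formula: change the
    # implementation's tax rate and this line must change in lockstep.
    gateway.pay.assert_called_once_with("o1", 12.0)

test_charge_mirrors_implementation()
```

The assertion on the exact call arguments is what makes the test a "second implementation": it duplicates the arithmetic rather than checking an externally observable outcome.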
In those situations, yes, I stand by tests being a costly overhead - one that makes sense in large collaborative systems, but not so much in small, agile MVPs.