My general philosophy is that clients will find ways to circumvent literally any special validation you do, because they are ultimately in charge of their browser. Definitely use HTML form types, `input` attributes, and `pattern` to try to ensure the user can't do something wrong, but the backend needs to be the authority on what is actually valid. I like the Unix-style "strings as the universal data type" idea: it's not about how the input happens, it's about the output. I honestly can't comprehend how someone thought it was better to do `const validKeys = [NUMPAD_1, NUMPAD_2, ...];` rather than `const validCharacters = '0123456789';`.
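A minimal sketch of the backend-as-authority idea, treating the input as a plain string regardless of how the client produced it (the function name and character set here are illustrative, not from any particular framework):

```python
# Backend-side validation: check the final string against the set of
# valid characters. It doesn't matter whether the client typed it on a
# numpad, pasted it, or scripted it - only the output string counts.
VALID_CHARACTERS = set("0123456789")

def is_valid_pin(value: str) -> bool:
    """Return True if value is non-empty and contains only digits."""
    return len(value) > 0 and set(value) <= VALID_CHARACTERS
```

Any client-side `pattern` attribute then becomes a usability nicety on top, not the actual gate.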
I'm working with a client on a greenfield project, and I picked Postgres for the database. For the staging server, I just installed Postgres locally, configured it, and it works perfectly fine. On the flip side, I'd rather just focus on code, and since there's a free tier (which Neon has), I'd rather offload that to a service.
So, my question is: what trade-offs am I making, other than going from a persistent local DB to an off-site one (i.e. probably a degree of speed)? Since it's free, does that mean my data might be inspected? I'm under an NDA, and my client would prefer his data stays in-house unless there's a good reason for it not to.
The free tier gets spun down to idle after five minutes of inactivity. The first request after that usually fails to connect as it takes a few seconds for it to come back up.
Neon cold starts are targeted at just a few hundred milliseconds. Anything on the order of seconds would be a regression in our minds. Obviously this depends on geographical latencies, etc. We are always looking to improve cold starts.
I'm on the free tier. In my case it looks like adding `app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {"pool_pre_ping": True}` for the Flask-SQLAlchemy configuration did the trick. I hope to be a paying customer soon :)
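For context, a sketch of how that option fits into the Flask-SQLAlchemy config (the connection URI is a placeholder): `pool_pre_ping` makes SQLAlchemy issue a lightweight test query before handing out a pooled connection, so a connection that died while the serverless database was idle gets discarded and replaced transparently instead of failing the first request after a cold start.

```python
# Engine options are passed through by Flask-SQLAlchemy to SQLAlchemy's
# create_engine(). pool_pre_ping validates each pooled connection before
# use, which absorbs the "stale connection after idle spin-down" error.
config = {
    "SQLALCHEMY_DATABASE_URI": "postgresql://user:pass@host/dbname",  # placeholder
    "SQLALCHEMY_ENGINE_OPTIONS": {
        "pool_pre_ping": True,  # test connections before handing them out
    },
}
```

In a Flask app you would apply this with `app.config.update(config)` before initializing `SQLAlchemy(app)`.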
We do not inspect user data. We don't even connect to user databases, unless given permission to. You can read our privacy policy here: https://neon.tech/privacy-policy.
Neon will never be as fast as a database running locally on your computer, but performance is always something we are paying attention to.
If you’re aiming for EU compliance, you’re going to need to host the data within the EU and ensure only EU staff have any sort of access, such as running support on it. Microsoft is exempt from that last bit for some reason, but they are Microsoft, so they probably cheat.
Not a lot of EU enterprises will be able to use your service without rather strict compliance. That may or may not be in your interest, but you might as well be up front about it. With the way the EU is heading on data protection, it may not be just enterprise organisations either by 2025; those compliance laws are getting stricter by the day.
Use the right data format for the right data. CSV can be imported into basically any spreadsheet, which can make it appealing, but it doesn't mean it's always a good option.
If you want CSV, consider a normalization step. For instance, make sure numbers have no thousands-separator commas and use a "." decimal point. Probably quote all strings. Ensure you have a header row.
Probably don't reach for a CSV if:
- You have long text blobs with special characters (e.g. quotes, newlines, etc.)
- You can't normalize the data for some reason (e.g. some columns contain formulas instead of concrete values)
- You know that every user will always convert it to another format or import it
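The normalization step above can be sketched with the standard `csv` module (the column names and sample rows here are made up for illustration):

```python
# Normalize on the way out: strip thousands-separator commas from
# numbers, quote every field, and always emit a header row. The csv
# module handles embedded quotes by doubling them per the CSV convention.
import csv
import io

rows = [
    {"name": "Widget, large", "price": "1,299.50"},
    {"name": 'Bolt 1/4"', "price": "3.20"},
]

def normalize_number(value: str) -> str:
    """Drop thousands-separator commas: '1,299.50' -> '1299.50'."""
    return value.replace(",", "")

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "price"], quoting=csv.QUOTE_ALL)
writer.writeheader()  # header row first
for row in rows:
    writer.writerow({"name": row["name"], "price": normalize_number(row["price"])})

print(buf.getvalue())
```

Quoting all fields (`QUOTE_ALL`) costs a few bytes but sidesteps most of the ambiguity around commas and quotes inside string values.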
The biggest thing is people have bad experiences with reviewers, and people have a lot of insecurity when their work is scrutinized.
The biggest issue with code review as a process is that it's often positioned adversarially. You set up a pull/merge request, and someone later reviews it asynchronously; it's not personal, it's cold and blunt. Even with a reviewer who has the best intentions, that's tough.
Pair programming and review can _help_ with this. Sit with the person who wrote the code, and review together.
I'm with you: code reviews are great for learning, but like you said, you need to see them as a tool to help you succeed and learn, not as a tool to show you how you're wrong.
At a previous place, our code reviews turned into a quick glance at the code without any real understanding. On one hand, I'd like to say it was built on trust in the engineer and the test suite; on the other, there was so much complexity and tedium in some of the things we did that it was hard to know what was going on in the first place.
If I want to do something like initialize CodeMirror when my component attaches to the DOM, I need an event/hook for that. I can't do that solely based on a state subscription?
I bring my laptop to all my job interviews, and as we start, I say I'd like to show some demos of things I wrote and ask if they want any code tours. I have several projects that showcase different programming philosophies and styles, and I try my best to match those projects with the company I'm interviewing for. I find that for consultancy jobs, this has a very good success rate.
I'm a huge fan. I used to volunteer on allexperts.com to help people with QBasic stuff, before the dawn of stackoverflow and other probably more searchable places.
I think headers were a great tool from a time before IntelliSense, when developers and teams would include function signatures plus comments on how to use them.
On the flip side, I think there are still a lot of developers who don't use language servers or use the equivalent of notepad who do rely on separate header files as a means of documentation.
That said, it has always been painful to structure interrelated headers and source files to avoid circular dependencies, and if modules truly resolve that, I'm happy to see it.