Hacker News | kennethko's favorites

An evergreen subject. I could write a long comment here, but I'd just be recapitulating what I've already written, which is that this is probably the single most overrated piece of technical writing since the advent of the public Internet. What was true in it wasn't new, and what was new wasn't true. All you have to do is read _Peopleware_ (which arguably has had a rough going itself) to see why.

https://news.ycombinator.com/item?id=35939383

Raymond's personal and political inclinations always complicate this, but my fascination with his downright weird role in our professional culture long predates an understanding of his ideology. I think the case I made in the preceding paragraph is easy to make on the merits. It's why there was a comic called "Everybody Loves Eric Raymond". He's a figure of fun, and a lot of people haven't caught up with that.


So the threat model here is transactional emails? Don't modern services solve this problem just by HTTPS-linking back to the service?

I'm very jealous of people capable of pulling this off. I guess there is no way to do something like this if you are not in the right circles.

What's "the db"? It sounds like something of small to medium scale if you can just restart it like that.

In any case, why not just relocate some vendor engineers on site for a bit? Or, better, why does the vendor not have a small presence in the corner?

Sounds like whatever "the db" is, it's probably some (objectively) small but very scary thing that's currently on fire, and people are trying to figure out how to put it out without crashing the plane or making too many waves internally, which is probably even harder. So asking about making vendor noises is (as useful as it may be) probably going down the wrong path, in much the same way this is probably not related to the outages (it may well be, but from the outside it's all coincidence anyway).


> but the flip side of this is, do most users of web2 services want their data collected, processed, analysed, rebundled, and resold to the extent that it is?

Facebook doesn’t sell user data. Google doesn’t sell user data. Twitter doesn’t sell user data. They sell ads. The user data is strictly kept internal to the company because selling it would weaken their market advantage.

This is one of those weird myths that has been propagated by fear-mongering journalists and politicians. You’d think tech people would be the first to call out this logical error, but for some reason it has been embraced as the ground truth despite being trivially easy to fact check.

> Whether you like it or not, you are paying for the services you use, but in ways that are often opaque to you.

In 2021, the trope that “If you’re not paying, you’re the product” has been repeated to everyone a thousand times over and it’s old news.

But paying for a product doesn’t mean that your data and usage patterns aren’t still being extracted for profit. Just look at smart TVs.

Blockchain doesn’t magically change this fact. It’s theoretically possible to design systems where certain types of data are obscured or encrypted, but it’s a huge leap to assume that web3 services will, by default, encrypt everything and obscure access patterns. Just look at how easy it is to track Bitcoin transactions between wallets publicly.


"Write it all yourself"

- Install software onto your machines

Package managers, thousands of them.

- Start services

SysVinit, and if shell is too complicated for you, you can write totally not-complicated unit files for systemd. For most services, they already exist.

- Configure your virtual machines to listen on specific ports

Chef, Puppet, Ansible, and literally hundreds of other configuration tools.

- have a load balancer directing traffic to and watching the health of those ports

Any commercial load balancer.

- a system to re-start processes when they exit

Any good init system will do this.

- something to take the logs of your systems and ship them to a centralized place so you can analyze them.

Syslog has had this functionality for decades.

- A place to store secrets and provide those secrets to your services.

A problem that is unique to Kubernetes and serverless. Remember the days of assuming that your box was secure without having to do 10123 layers of abstraction?

- A system to replace outdated services with newer versions ( for either security updates, or feature updates ).

Package managers.

- A system to direct traffic to allow your services to communicate with one another. ( Service discovery )

This is called an internal load balancer.

- A way to add additional instances to a running service and tell the load balancer about them

Most load balancers have built-in processes for these.

- A way to remove instances when they are no longer needed due to decreased load.

Maybe the only thing you need to actively configure, again in your load balancer.

None of this really needs to be written yourself, and these assumptions come from a very specific type of application architecture, which, no matter how much people try to make it one, is not a one-size-fits-all solution.
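To make the "start services" and "re-start processes when they exit" items above concrete, a minimal systemd unit (illustrative only; the binary path and service name are made up) covers both:

```ini
[Unit]
Description=Example service (illustrative)

[Service]
ExecStart=/usr/local/bin/myservice
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

One unit file gets you supervised startup and automatic restarts, with no orchestrator involved.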


You can (and always should) decouple the plan and apply steps. Write the binary plan to a plan file (with -out=path) and review the plan's terminal output, which shows all actions Terraform will perform in human-readable form. Then use the plan file as input for the apply step. The apply won't perform any action that was not in the plan file (which matches the output you reviewed), and if the state of the environment has changed in the meantime, it will abort without causing undesired changes and you can restart the process.
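Sketched as commands (assuming a current Terraform CLI; the plan filename is arbitrary), the decoupled flow looks like:

```shell
terraform plan -out=tfplan   # write the reviewed actions to a binary plan file
terraform show tfplan        # human-readable review of exactly what will happen
terraform apply tfplan       # applies only the reviewed plan; aborts on state drift
```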

There is also the prevent_destroy[0] meta argument for resources but afaik it has no effect when you remove the resource from your .tf files[1], so it would not have helped in this case.

[0] https://www.terraform.io/docs/language/meta-arguments/lifecy...

[1] https://github.com/hashicorp/terraform/issues/17599


I totally agree with the above, and have commented on it many times before, but note the 90 day standard is because that is the maximum amount of time allowed for ISOs by the IRS. To allow for conversion after that time (e.g. 5-10 years seems to be what a lot of people are pushing for), the ISOs convert into non-qualified options. Still worth it in my opinion. Even better would be for the IRS to change the law (not sure if it's a law or a reg), because with companies staying private so much longer it's a different world.

I remain baffled at the folk wisdom of using providers outside the United States in order to avoid the Five Eyes IC. You accomplish the opposite thing by doing that. NSA is literally chartered to hack into things outside of US jurisdiction; they don't even need permission to do it. They might even need permission not to do it.

Obviously, hosting in the US isn't a cure-all. And there are other good reasons to work with companies in Europe; for instance, their data privacy rules can often be better than ours, which can give you some commercial protections.

But these discussions about where people's email is hosted always talk about jurisdictional issues, and the only jurisdictional issue that matters here is this: if NSA is going to swipe mail from Google Mail, there's a whole fuckload of paperwork they have to do. If they want to get mail from your random email provider in Switzerland, they can just push a button.


As someone who has argued against a rewrite, lost the argument, and then proceeded to do the rewrite, I would push back strongly on the notion that we have a perfect specification, which is just "do what the old thing did". This specification is woefully incomplete, of course, just as a vague requirements document for a brand new service or product is incomplete.

When someone proposes a rewrite for software, I ask him or her to think critically along the following questions:

1) What is the purpose of the rewrite? What do you hope to accomplish by it? What business objectives are furthered by the rewrite?

2) Explain in detail what is wrong with the existing code base, and why it is untenable to fix those problems piecemeal.

3) Explain in detail how the rewrite will avoid, overcome, or improve significantly on all the problems mentioned in 2).

In my most recent case, as I expect in many others, I couldn't convince anyone to engage on any of these questions.

For 1), we were told that the org planned to build significant new features on the product and that the rewrite would help. However, the company's priorities changed significantly even as the rewrite was just getting started. By the time I left the company, I was not aware of any short- or long-term plans to continue adding functionality to the now-rewritten product.

For 2), the level of detail was along the lines of "the code base is awful. I hate it!" And, that's about it. Question 3) is, of course, impossible to answer if you failed to answer 2).

Failure to be able to answer these types of questions is also in my eyes a strong indicator that you don't understand the existing product very well. And why would we? The existing team that built the thing had all left by that point, which is, in my experience, the norm, not the outlier. It's normal for devs to build something for a few years and then peace out, either via an internal transfer to another team or a new job opportunity.

I believe that much of software is knowledge acquisition, and much of the cost of software maintenance is in dealing with the failure to transfer and maintain acquired knowledge over time. Rewrites can be spurred by ignorance, and that same ignorance can lead to the rewrite taking much longer than expected.


Here's my beef with estimates.

I can give you a really really accurate estimate, but in order to do so we're going to have to spend a lot of time going through the request, building and verifying actual requirements, designing the solution and then validating it.

The process will require dev resources, business resources and probably people from the support team and will take a lot of time.

I'm happy to do it. It's actually my favorite part of the job. But the business invariably doesn't want to spend the time and money to do that.

They'd generally much rather start with a fairly vague description of what they need and let the devs keep throwing stuff against the wall and see what sticks.

Good and accurate estimation is not just a dev function. It requires buy in and input from the entire business stack.


I seem to be a bit of an outlier, but I am not looking forward to generics in Go. I don't like the kinds of discussions it attracts...vague hand-wavy arguments about "expressiveness" and "purity". Expressiveness to me means more arguing in code reviews, and nothing is stopping us from writing pure functions now, I do it all the time.

I maintain a few large Go codebases, tens of thousands of LoC each, and not having generics in practice is way, way down on my list of challenges. Equivalent Java codebases that I've maintained are usually harder to maintain because of generics, with over-eager functional programmers itching to show off their skills in abstraction, instead of solving business problems.

I hope I'm wrong, and that this will make Go a better language. But I doubt it.


Well, in this case it mostly didn't happen in the future - most of it happened last May. Our failure to catch it recently was a mistake, but the root error was building it that way in the first place. Still, I take your point.

After last year's snafu, we had a company-wide retro to discuss what had happened. It lasted a couple of hours, during which several people (including myself) were pretty blunt about what we thought. I don't want to get into a complete postmortem here, if only because it would make this post require ten paragraphs of niche internal culture context, but the biggest problem was that we had an internal communication breakdown. Many people who were uncomfortable with what we were doing, and who could have predicted the public response, did not feel empowered to speak up about it. Not in the sense that they'd face backlash for doing so, but in the sense that they felt they'd be ignored. This was less true than they thought - had they all spoken up at once, it probably would have made a difference - but more true than it should have been.

Since then, a number of people within Triplebyte (including Ammon, but also myself and various others) have made an effort to create space for those kinds of concerns.

One of our defining traits as an organization is that we're very good at creating coherent internal narratives. Everyone knows what we're doing and why. But the flip side of that is that in the effort to create clarity, we sometimes run over inconvenient details like "wait isn't that a terrible idea". That means that to avoid problems like last year's, we need to be explicit and deliberate in actively seeking out dissent about what we're doing, especially from people who are not the very assertive strong personalities that tend to show up in leadership. That doesn't mean that dissent is always right - strategic decisions are difficult and complicated, people do sometimes lack the visibility to see why they're made, and in a room of fifty people there's always going to be some disagreement - but it costs us very little to listen to it and (as we learned last year) can cost us dearly when we don't.

(Actually, as I was preparing to submit this post, one of our engineering leads pinged me on Slack to suggest ways we could avoid mistakes of the kind that spawned this thread - TLDR, build less automated shit that we can forget about until it bites us.)

I realize this reply is a little bit nonspecific ("make space for"? what does that even mean?), but that's the nature of solving a culture problem. You're necessarily wrestling with subtle cues and unspoken assumptions rather than with a thing where you can go "ah, yes, we just need to change step 2a of our product development process". But for what it's worth, I think we've gotten better about making sure we consider how people feel about what we're doing, and we've avoided some potential bad decisions in the year since then as a result.


PMs: Email.

Posts: Blog. [XML-RPC if you wanna email from your phone to your blog. Different post-type or tag for 'like' vs 'post' (heck, invent any 'like' 'like' you like).]

Follow: RSS.

It's all there and decentralised for years, just not in a polished package of a 140 character comment, and likely requires more than 15 seconds of thinking. Actually really straightforward.

Only tricky thing would be SMS but that's... 'depreciated' as a popular feature today.


It's not really about running servers for 10 years. It's about having a platform to build a product on that you can support for 10 years. RHEL software gets old over time, but it's still maintained and compatible with what you started on.

Consider an appliance that will be shipped to a literal cave for some mining operation. Do you want to build that on something that you would have to keep refreshing every year, so that every appliance you ship ends up running on a different foundation?


RHEL and its derivatives are the only Linux distributions that maintain binary compatibility over 10+ years while getting not only security updates but feature additions when possible.

This is something I don't think the wider community understands, nor do they understand the incredible amount of work it takes to back-port major kernel/etc. features while maintaining a stable kernel ABI as well as a userspace ABI. Every single other distribution stops providing feature updates within a year or two. So LTS really means "old with a few security updates," while RHEL means it will run efficiently on your hardware (including hardware newer than the distro) with the same binary drivers and packages from 3rd-party sources for the entire lifespan.

AKA, it's more a Windows model than a traditional Linux distro, in that it allows hardware vendors to ship binary drivers and software vendors to ship binary packages. That is a huge part of why it's the most commonly supported distro for engineering tool chains, and a long list of other commercial hardware and software.


This topic just came up recently on a podcast I was on, where someone said a large service was down for days, and the outage tanked his entire business. But he was compensated in hosting credits for the exact amount of downtime for the one service that caused the issue. It took so long to resolve because it took support a while to figure out it was their service, not his site.

So then I jokingly responded with that being like going to a restaurant, getting massive food poisoning, almost dying, ending up with a $150,000 hospital bill and then the restaurant emails you with "Dear valued customer, we're sorry for the inconvenience and have decided to award you a $50 gift card for any of our restaurants, thanks!".

If your SLA only provides precisely calculated credits, that's not really going to help in the grand scheme of things.


Scalability itself isn't the selling point. It's freedom.

It's the freedom to get things wrong, iterate and try again. It's the freedom to move resources around. It's the freedom of not having to wait for your procurement process and approval from a purchasing department. It's the freedom of not having to wait for your operations team to plug in and configure bare metal. And all of these freedoms have compounding interest.

For most organizations, what I just talked about equates to man years of work wasted and I think a lot of people nitpicking about cloud being expensive have lost sight of this.

Sure it costs more, but it's from money you've already saved. You'll have loads of saved money left to spend on more developers too.


The problem is that fundamentally we always have types. We must define different types of bytes and the different things we can do with those bytes. If you can generate a binary from your programming language, it must use a form of static or inferred typing to know what assembly code to generate. A dynamic language just checks at runtime, and adds a bunch of overhead, which isn't always necessary, and is more likely to result in weird "defined" behavior like we get from javascript and php. Typescript is very popular because you can make certain guarantees at compile time, instead of a vague hand-wave of "the code looks good".

As an example of the benefits of type-checking, in the Vulkan api, there are a lot of handles. VkRenderPass, VkPipeline, VkSwapchain, VkDevice, etc. All of these are just pointers. If we take a function like:

    void vkDestroyCommandPool(VkDevice,VkCommandPool,const VkAllocationCallbacks*);
And remove the type checking:

    void vkDestroyCommandPool(void*,void*,const void*);
It is the same API, but without type-checking. Even if the parameters are labelled, with this new API I could mess up and pass the wrong thing.
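The same idea can be sketched in Go (hypothetical handle types, not real Vulkan bindings): distinct named types over the same underlying representation let the compiler reject swapped arguments.

```go
package main

import "fmt"

// Hypothetical handle types: same underlying representation,
// but distinct as far as the type checker is concerned.
type Device uintptr
type CommandPool uintptr

func destroyCommandPool(d Device, p CommandPool) string {
	return fmt.Sprintf("destroying pool %d on device %d", p, d)
}

func main() {
	d := Device(1)
	p := CommandPool(2)
	fmt.Println(destroyCommandPool(d, p))
	// destroyCommandPool(p, d) would not compile:
	// cannot use p (type CommandPool) as type Device
}
```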

This is a fair question; you shouldn't be downvoted. Lean ticket trackers and productivity tools are perennially launched and invariably become as bloated as the incumbents.

This is because the information in these tools is business critical and needs to be consumed by almost everyone in the company. Moreover, different people need access to the same information in different forms of presentation, with aggregation and emphasis of different data. In contrast it's easy to make a work tracker just for developers - GitHub and GitLab have pretty much solved that problem. Making a ticket tracker which works as a single source of truth for all members of an organization who need access to the tickets is much more difficult.

Developers, PMs, VPs, support staff, data scientists and designers all need access to overlapping information which is ideally stored in the form of tickets. But each of those roles needs something different from the tickets, both individually and in aggregate. You can't just target a single group here, because then you're trying to get the company to adopt separate repositories of work (it's already an additional complexity that work is split between, e.g., GitHub and Jira).

So to obtain the critical mass of adoption they need for product market fit and growth, these tools organically evolve to become everything to everyone in an organization. And suddenly your tracking tool is stuffed with metrics, integrations, feeds, dashboards, reports, etc.

The other thing is that these tools become especially bloated by integrations and plugins. A brand new instance of any tracker or project manager feels clean and fast. A few years in, it feels slower and more crowded by all the custom/third party additions rolled into it.


I have short term heart damage from covid. Wanting herd immunity is ignoring the suffering a good subset of the infected will experience. I guess your next line will involve something about suffering being better than dying. And I guess it's okay that I can't play with my toddler because of secondary effects, because they were highly unlikely to die.

Text does not convey tone. You are ascribing tone in your mind. Text is language and does not bring along gestures, facial expressions, or eye contact: things your brain uses to detect tone and subtleties in the words.

For instance, based on this text your "tone" is combative, but I bet you are reading it in your mind with eyebrows raised and hearing a different accent on syllables. Maybe you would say that first sentence a little bit quieter to show compassion, but I'm reading it as if you had an attitude on the words "your tone is".


First, note you're likely to get some survivorship bias in these responses - older people who left the industry are less likely to comment on HN.

That said, as a developer in their mid-40s, here's my take:

1. In general, mid-level engineering management jobs (which I consider Manager to Senior Director level) pay significantly more because they are shittier jobs. Sure, there is the rare soul that loves these kind of jobs, but I think most Directors would freely admit they liked their day-to-day a lot more when they were coding. I find that the type of folks who succeed in these roles have basically stopped caring about work so much and are much more invested in their family life. I.e. they don't "love" their job, but they do well at it because they want to make a nice living for their family.

2. I went the senior engineer -> architect -> director -> senior director route, and honestly I hated being a director/senior director. I don't mind so much managing people, and I really enjoy mentoring, but at the director/senior director level you're doing a ton of managing up, which I hate, and there are a ton of logistical responsibilities at this level that I find mind-numbingly boring.

3. So I switched companies and am now a "principal engineer", which I love and I think is my sweet spot. I don't have any official direct reports, but I do a lot of mentoring and general "team management". Given my history, the senior execs at my company appreciate some of my "management-level input", but they know I'm most effective if I'm not involved in tweaking job-level band discussions. To echo another commenter, I do just enough management-level stuff to keep me involved at a high level, but I spend the majority of my time writing code, doing code reviews, and working closely with product management to give engineering input re: new features.


TL;DR: use a language your company can support. It doesn't matter how suited to the job a language is; if it's a single engineer or a small team, what happens when they move on? How do you support it? Who's on call?

Not Elixir, but a cautionary tale from our Erlang project. ~8 years ago our IoT backend was written in Erlang. This was the early days of IoT, so sure, it made sense as a technology: it could scale well, handle all the symmetric sessions, etc. It was a good tool for the job and in theory could scale well.

But, there's always a but. We're a Python shop on the backend and C on embedded devices. Engineers move on, there were some lean times, and after a few years there was no one left who could support an Erlang system. So we hired for the position. It's key infrastructure, but not really a full-time project, so we hired a Python+Erlang person.

But! Now they're on call 24/7, since the regular on-call roster are Python people, and when the Erlang service goes wrong, it's call the one engineer. So, do you hire 2 or 3 people to support the service? No, you design it out. At the time it was 2018, and IoT services were plentiful, so we could use an existing service and Python.

Another way of looking at it: let's say it's a critical service and you need at least 3 people to support it/be on call. If each engineer costs $200k/year, that's $600k/year. Does this new language save you that much over using a more generic and widely known language in the org?


I've been wanting to do a blog series on how we use Go. Here is a short version.

Tests:

We use the testing package for unit tests and (maybe too much) use interfaces as arguments so we can create test fakes that behave the way we want so we can validate error paths, logs (yes, we assert on logs), metrics, and, of course, green/good expected behavior.

We then have acceptance-level testing. These tests ensure the system works as expected. We leverage docker-compose to spin up all our dependencies (or, in some cases, stubs, but only rarely). We then have a custom testing package built atop the stdlib one. It behaves very similarly, but allows for registering test suites, pre- and post-test-suite methods, and pre- and post-test methods, and generates reports in JSON/XML for QA to keep track of test cases, when they ran, pass rates, etc. As part of our SOC 2 compliance, we have these to back up our thoroughness in testing. Tests can also have labels so we can run all tests for a given feature only, or a given suite. These tests hit the running binary of our service under test, so if it works here, it will work when deployed.

Before a service makes it to prod, it lands in staging. There, a final suite of tests go through user features and ensure that things are ok one last time. Total black box.

Dependency Injection / Mocking:

I am very, very much against mocking. For that, I did write a blog post; though I think the thing it highlighted most is that I need to write more :). You can google "When Writing Unit Tests, Don't Use Mocks" if you want to read it. When you mock, you create brittle tests that are tied to the assumptions in your mock. Instead, we use "fakes." These are test structs that match an interface and allow us to control their behavior. You might ask how that is different from a mock. Mocks have assumptions and make your tests more brittle and subject to change when you update the code (which is what Martin Fowler concluded in Mocks Aren't Stubs). People tend to write "thing called 4 times, with arguments foo and bar, and will return x, y, z... blah blah blah." Instead, when you use a fake struct that matches an interface, you can make it as simple or complex as needed, and usually simpler is better. Return a result or an error. Validate the code does what you need. We also avoid functions as parameters just for testing, i.e. test code that uses a custom function that is not the function used in prod. These make it easy to cause nil panics and are kludgy. Fakes get us what we need 99 times out of 100.

Routing frameworks:

We have folks who don't use them, or use gorilla/mux, or chi (my fav). They are convenient and make things easier for passing URL parameters. You can, of course, do this without a custom router. I like chi because it is stdlib compatible.

HTTP Frameworks:

Nope. I could see it, maaaayyyybee, if we were writing a bunch of CRUD apps, but we don't. The services my team makes tend to have few routes and not all the CRUD stuff. Even then, if we do a lot of CRUD work, it will only eat up a few days. What we do have, however, is a project skeleton generator, so our projects all start out with the same basic directory structure and entry points. Everyone knows that your app starts in cmd/appname/main.go, for example.

Logging (and errors):

The other thing we leverage in place of HTTP frameworks is a custom logger, plus an experiment we are doing with custom error types. We have logging requirements to play nice in our ecosystem at work. Logs are all structured JSON and have some expected keys; the logger generates all that. We looked at all the log packages and none matched exactly what we needed. We can store key-value pairs on the logger and pass that logger around (so you only have to do logger.Add("userid", userID) once, and now all logs going forward in a request will have it). You get timestamps, app name, and a few other fields for free. You can create a child logger that will have its own context kv pairs so you don't pollute its parent (helpful for when you go into a function and want to add more details to logs based on errors specific to that function). The other thing we are playing with now, on our new project, is a custom error type that stores a map of key-value pairs. We bubble up our custom error type, wrap it with more kv pairs at each level, and only log at the top. When the error is logged, we use our logger to extract the kv pairs and bingo: structured logs with context for each bubble-up point, carrying kv pairs that are only known down at deep levels.

BuildPipe:

We run our tests locally usually. But when we create a PR, a build is kicked off using BuildKite (plugin system is really nice). A PR cannot be merged to master until the test suite passes, which includes the acceptance tests from earlier. After merging to master, a fresh build is run again and that creates artifacts that are then used by ArgoCD so we can roll our code out to our kube cluster.

I love Go. It is my favorite language I've worked in. There are warts, for sure. You can get that list anywhere. There are some oddities when assigning to structs in maps, nil interfaces, shadow variables, non-auto-checked discarded errors, and others. The biggest wart right now is the module system. I think that will improve over time.


I heard this on Lex Fridman's podcast with Microsoft CTO Kevin Scott, on "storytelling":

Lex Fridman: Microsoft has 50-60 thousand engineers. What does it take to lead such a large group of brilliant people?

Kevin Scott: ...(snipped)... One central idea in Yuval Harari’s book Sapiens is that “storytelling” is the quintessential thing for coordinating the activities of large groups of people once you get past Dunbar’s number. I’ve really seen that, just managing engineering teams. You can brute-force things with small teams, but past that things start to fail catastrophically if you don’t have some set of shared goals. Even though this is sort of touchy feely, and technical people balk at the idea that you need to have a clear mission, it’s very important.

Lex Fridman: Stories are sort of the fabric that connects all of us, and that works for companies too.

Kevin Scott: It works for everything. If you sort of think about it, our currency is a story. Our constitution is a story. Our laws are a story. We believe very strongly in them, and thank God we do, but they’re just abstract things, they’re just words. If we don’t believe in them, they’re nothing.

Lex Fridman: In some sense, those stories are platforms.


Sure thing:

1) Login to settings and enable multiple inboxes. This is my setup:

has:red-star URGENT

has:orange-star Important

has:blue-info Pending on someone else

has:purple-question For reference

Panel position: Below the inbox

2) Still in the settings, enable stars in the order above.

3) When you check your mail, send off instant replies, and archive or star the rest accordingly. To get the orange star you just click the star icon twice, three times for the blue one, etc. The purple ones I use to store things like documents I may reference later. In practice, I focus on the red stars every day and try to work through the orange ones maybe once a week. You could probably do without the blue, but it's handy when you're in a project-management-like role: you can run through them once a week and ping people for updates.


I am a solo founder of a website monitoring SaaS [0]. Theoretically, my uptime should be higher than my customers'. Here are a few things that I have found helpful in the course of running my business:

* Redundancy. If you process background jobs, have multiple workers listening on the same queues (preferably in different regions or availability zones). Run multiple web servers and put them behind a load balancer. If you use AWS RDS or Heroku Postgres, use Multi-AZ deployment. Be mindful of your costs though, because they can skyrocket fast.

* Minimize moving parts (e.g. databases, servers, etc.). If possible, separate your marketing site from your web app. Prefer static sites over dynamic ones.

* Don't deploy within 2 hours of going to sleep (or leaving your desk). 2 hours is usually enough to spot a botched deploy.

* Try to use managed services as much as possible. As a solo founder, you probably have better things to focus on. As I mentioned before, keep an eye on your costs.

* Write unit/integration/system tests. Aim for good coverage, but don't beat yourself up for not having 100%.

* Monitor your infrastructure and set up alerts. Whenever my logs match a predefined regex pattern (e.g. "fatal" OR "exception" OR "error"), I get notified immediately. To be sure that alerts reach you, route them to multiple channels (e.g. email, SMS, Slack, etc.). Obviously, I'm biased here.
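As a minimal sketch of that log-matching idea (the pattern and function names are my own invention, not from any particular monitoring tool), the keyword filter could look like:

```go
package main

import (
	"fmt"
	"regexp"
)

// alertPattern mirrors the kind of filter described above: a
// case-insensitive match on common error keywords.
var alertPattern = regexp.MustCompile(`(?i)\b(fatal|exception|error)\b`)

// shouldAlert reports whether a log line should trigger a notification.
func shouldAlert(line string) bool {
	return alertPattern.MatchString(line)
}

func main() {
	for _, l := range []string{
		"INFO request served in 12ms",
		"ERROR upstream timed out",
	} {
		if shouldAlert(l) {
			// In practice you would fan this out to email/SMS/Slack.
			fmt.Println("alert:", l)
		}
	}
}
```

A real setup would run this against a log stream and debounce repeated matches so a crash loop doesn't page you hundreds of times.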

I'm not gonna lie, these things make me anxious, even to this day (it used to be worse). I take my laptop everywhere I go and make sure that my phone is always charged.

[0] https://tryhexadecimal.com


One thing that has always held true throughout my life is the phrase: "If you don't know why, it's money."

Why is there no national healthcare? Money: someone makes a ton of it by preventing the nationalization of the healthcare system.

Everything that doesn't make sense, or has better alternatives that somehow aren't adopted, always boils down to money :(


Here is how I approach conferences:

1. Decide up front what sessions I am going to attend. All of these should cover things I know very little about, or that take my knowledge from 200 to 300. Don't go to sessions whose answers you already know. Be super selective on sessions as well: skip vendor shills and affirmative action placements.

2. Look for AND SCHEDULE casual between-session coffee meetings with folks I know are attending. This deepens my relationships with them, and it's also one of the most effective ways to meet new people: the ones who happen to be with that person.

3. Go to birds of a feather (if possible). People are much more likely to want to meet other people at these things.

4. Ask everyone you can "How is your conference going?" and "What are you working on in this space?". I make it a goal to ask at least ten people this each day, in the coffee station lines, at lunch, and at happy hour. At a great conference, people love talking about what they are passionate about.

5. Drop out of late night events and drinking sessions if you are just there for the music or the beer and not getting knowledge or connections out of it. This is business, not party time.

If you can't do these things, yes, it is a waste of time.

But if you can, I find that I can relentlessly leverage all of the info and contacts from it at my $DAY_JOB to get a lot done the rest of the year.

