Was about to say the exact same thing. Housing constraints are not limited to "near Nvidia HQ". It's the entire Bay Area.
For those not from the area, it's city after city after city across the Bay Area: 60 miles up the Peninsula (from San Jose to San Francisco) and another 60 miles up the East Bay, and the vast majority of it has inventory and affordability challenges.
4. The integration with Prisma is buttery smooth. In your UI code you call a function that contains the database access; Blitz converts that function call into an API call and runs your database code on the server (see the sketch after this list). When Prisma isn't enough, you can drop down to `db.$queryRaw` and write SQL directly.
5. Personally, I dislike fiddling with 800 libraries to get a project up and running. Blitz includes Jest, Prettier, and everything else you need, so you're productive from day 1.
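Here's a minimal sketch of the query pattern from point 4, in TypeScript. The file path, the `project` model, and the `useQuery` call are illustrative assumptions on my part rather than anything Blitz prescribes verbatim; `db.$queryRaw` is the Prisma escape hatch mentioned above.

```ts
// app/projects/queries/getProject.ts (hypothetical file)
import db from "db" // the Prisma client instance the Blitz template sets up

// This runs on the server. When you import it in UI code, Blitz swaps the
// import for an RPC call to an auto-generated API endpoint, so it still
// reads like an ordinary function call.
export default async function getProject({ id }: { id: number }) {
  // Regular Prisma query
  const project = await db.project.findFirst({ where: { id } })

  // Escape hatch when Prisma isn't enough: raw SQL via $queryRaw
  const taskCount = await db.$queryRaw`
    SELECT count(*) FROM "Task" WHERE "projectId" = ${id}
  `

  return { project, taskCount }
}
```

On the client side you'd consume it with something like `const [data] = useQuery(getProject, { id: 1 })`, and Blitz handles the HTTP round trip for you.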
One request I'd have for SUS is the ability to add a weekly update for a past week.
More than once I've been working late Sunday and well into Monday morning. When that happens I can no longer submit an update for the prior week, so it looks like I've done nothing for that week.
Yeah, this makes sense as a feature -- we didn't build it out originally because we were worried about founders gaming it to back-date updates and earn their "streak"/certification without sending in 8 weeks of updates. But there's probably a middle ground that lets you backdate updates at least for a period of time after the deadline.
Tenable | SF / Bay Area | ONSITE or REMOTE | Sr. Software Engineer and Principal Engineer positions open
Tenable is a rapidly growing network security company. We’re expanding our engineering team to keep up with our rapid customer and revenue growth.
Tenable.io is the first comprehensive cyber exposure platform. You will be joining a team that is building our data management capabilities, streaming data pipelines using Kafka, ingest APIs, and various other parts of our data infrastructure. This is a great opportunity to have a significant impact.
In terms of skills, you should be able to develop, deploy, and maintain a microservice written in Java/Kotlin or another JVM language, have experience with databases (PG, C*, Elasticsearch, etc.), and have expertise managing data at scale. If you already know Kafka, that's a plus. AWS, GCP, or Azure experience is needed.
I'm the Sr. Director of Engineering, Data Services. You can reach me at aahmed @ tenable.com. Please put "HN" in the subject line.
Tenable | SF / Bay Area | ONSITE or REMOTE | Software Engineer, Principal Engineer
Tenable is a rapidly growing network security company. We’re expanding our engineering team to keep up with our rapid customer and revenue growth.
Tenable.io is the first comprehensive cyber exposure platform. You will be joining a team that is building a streaming data pipeline using Kafka and Java/Kotlin. We are also building connectors to 3rd party applications using a FaaS (aka serverless). This is a great opportunity to have a significant impact.
In terms of skills, you should be able to develop, deploy, and maintain a microservice written in Java/Kotlin or another JVM language: one that exposes a REST API, calls other REST APIs, parses and produces JSON, and reads/writes to/from a data store (PG/Aurora...). It's a plus if you know how to instrument your code (telemetry, logs, etc.) and understand retries with backoffs and, ideally, circuit breakers. If you already know Kafka, that's a plus. AWS, GCP, or Azure experience is needed.
I'm the Sr. Director of Engineering, Data Services (Data Science and Data Engineering). You can reach me at aahmed @ tenable.com. Please put "HN" in the subject line.
Tenable | SF / Bay Area | ONSITE or REMOTE | Software Engineer, Sr. Software Engineer, Principal Engineer, Engineering Manager
We're hiring at all skill levels.
Tenable is a rapidly growing network security company. We’re expanding our engineering team to keep up with our rapid customer and revenue growth.
Tenable.io is the first comprehensive cyber exposure platform. You will be joining a team that is building a streaming data pipeline using Kafka and Java/Kotlin. This is a great opportunity to have a significant impact.
In terms of skills, you should be able to develop, deploy, and maintain a microservice written in Java/Kotlin or another JVM language: one that exposes a REST API, calls other REST APIs, parses and produces JSON, and reads/writes to/from a data store (PG/Aurora, or whatever). If you're applying to one of the more senior roles, you should also know how to instrument your code (telemetry, logs, etc.) and understand retries with backoffs and, ideally, circuit breakers. If you already know Kafka, that's a plus. AWS, GCP, or Azure experience is needed.
Interview process
We interview quickly. Our goal is 5-10 business days for the entire process.
- Cultural fit with the Director of Engineering (i.e. me)
- Technical interview with 2 peers
- Coding challenge (practical exercise similar to what you'll actually do)
- Decision
I'm the Director of Engineering, Ingest and Pipelines. You can reach me at aahmed @ tenable.com. Please put "HN" in the subject line.
Amazon is a huge threat to any infrastructure company using Apache 2.0: if your project gains popularity, Amazon can host it and become a direct competitor. Given that Amazon's services benefit from the IE effect, it's not irrational for an open-source infrastructure company to eliminate that threat via licensing.
Badger is comparable to LevelDB and RocksDB. However, it is not comparable to Cassandra, ScyllaDB, or DynamoDB, as those are distributed database management systems.
Cassandra, for example, has its own storage engine that's responsible for writing bits to disk.
Redis is a database management system (DBMS). Badger is a storage engine.
An application developer would not choose Badger, but instead would pick a DBMS such as Redis.
A database engineer would use Badger to build a DBMS that the application engineer could then use. If they so chose, they could even expose a Redis-compatible API.
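To illustrate that layering, here's a hypothetical TypeScript sketch. The interface and method names are made up for the sake of the example; they are not Badger's or Redis's actual APIs, just the shape of the division of labor described above.

```ts
// Roughly what a storage engine (Badger, LevelDB, RocksDB) provides:
// an embedded, in-process key-value API over files on local disk.
interface StorageEngine {
  get(key: Uint8Array): Promise<Uint8Array | null>
  put(key: Uint8Array, value: Uint8Array): Promise<void>
  delete(key: Uint8Array): Promise<void>
}

// Roughly what a DBMS (Redis, Cassandra, ...) adds on top: a server, a wire
// protocol, data types, replication, auth, etc. Persistence itself is
// delegated to whatever storage engine sits underneath.
class ToyKeyValueServer {
  constructor(private engine: StorageEngine) {}

  // e.g. the handler for a Redis-style SET command arriving over the network
  async handleSet(key: string, value: string): Promise<string> {
    await this.engine.put(new TextEncoder().encode(key), new TextEncoder().encode(value))
    return "OK"
  }

  // e.g. the handler for a Redis-style GET command
  async handleGet(key: string): Promise<string | null> {
    const raw = await this.engine.get(new TextEncoder().encode(key))
    return raw === null ? null : new TextDecoder().decode(raw)
  }
}
```

The application developer talks to something like `ToyKeyValueServer` over the network; the database engineer is the one plugging a concrete `StorageEngine` (Badger, RocksDB, ...) in underneath it.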