Cool batch, particularly LemonBox which provides personalized vitamin packs to buyers in China. There's a huge user base right there, and I imagine the U.S. market is largely saturated / competitive already.
I must respectfully express my concern over: "Grabb-it Inc. turns rideshare cars into digital billboards." There's probably a market for it as online advertisers face an uncertain future with possible regulation, and at least scrutiny, of social media companies, but is this really the future we want to build? Where the only goal of some of these companies is to unrelentingly cover every square inch of the world with ads? Surely there are lawsuits waiting to happen when drivers, distracted from these eye-catching ads on all the cars around them, kill people.
I am worried that our public spaces are turning increasingly hostile to our citizens. Won't this trend continue to make the public space more unpalatable?
All we need is a YC-backed AR startup that removes ads from your personal visual space. Although it might be difficult to get that tech safe enough to be used while driving.
Why? This hits at the heart of startup porn. It's about money. It's only about money. It's only ever been about money. This is a good way to make money, so it exists. I'm happy for startups that don't hide behind their capitalist mission.
> It’s meant to only run when the driver is between rides. Once a passenger hops in the car, the projector is shut off — because, well, no one wants a projector blasting light in their face on the way to their next meeting.
So, they intend to use it as a means for drivers to make money when they are not in the middle of a trip.
I agree. I think the DMV should ban this if it ever happens. I imagine it would feel like Times Square everywhere. There is a reason why tail lights are red and not replaced with displays that show ads.
So is LemonBox basically a daigou in startup form? I'm not sure what their advantage will be vs. the numerous small players that already do this.
I also wonder if they misunderstand the vitamin market in China. Whereas in the States it is mostly bros looking to buff up, in China you buy vitamins abroad to show your elderly parents/grandparents you care for them. But maybe my views are outdated?
Are vitamins actually useful? The vast majority of people who eat normally don’t need vitamin supplements. It’s all mostly scammy to me: selling people a product that has almost no value unless you are malnourished. It’s big business sure, but it’s on the level of homeopathy for most people. Providing vitamins to undernourished kids might be valuable, but there isn’t any money in that. Vitamins are right up there with “organic” in terms of quantifiable health benefits.
There are multivitamins and then there are supplements. Personally, I find the former useless, but there are many supplements that are useful for various ailments: for example, melatonin if you find it hard to fall asleep, or DGL licorice and probiotics if you have digestive issues. Some people swear by certain B vitamins for acne as well. Different supplements work for different people.
Optic [0] caught my eye. They seem to have made possible a new kind of abstraction in code. Something that's different from both functions and macros. It's a kind of abstraction that can be arbitrarily customized at each call site, yet retain its identity across all of them.
Example 1:
foo = do_a()
bar = do_b(foo)
Example 2:
foo = do_a()
bar = do_z(foo)
baz = do_b(bar)
Example 3:
foo = do_a()
bar = foo + 10
baz = do_b(bar)
The three examples have strong similarity in their first and last statements. There is a pattern there, but that pattern cannot be abstracted into a single macro or a function. So this pattern does exist, and is recognizable to the human eye, but the language does not allow one to express it.
What Optic seems to do is to recognize the pattern, create a single model out of it, and allow you to re-use that pattern elsewhere, or even transform it into new ones and re-use those new patterns elsewhere.
Again, this would be a new kind of abstraction. One whose leaks are easier to fix. You can have your cake and eat it too!
You're definitely understanding the premise of our parser. It evaluates a regex-like set of rules on an AST to match different forms of code. These rules can be recursive, as you can see in the first example on our home page [0]. In that example, Optic matches an Express.js route and the headers, parameters and responses inside it. The result is a nice JSON object with shape {method, url, parameters: [], headers: [], responses: []}. OOTB the JS interpreter couldn't do this because it doesn't encounter the calls to req.query.param_name until that code runs.
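To give a flavor of the rule idea, here's a heavily simplified toy sketch (our real rules and node shapes look different; everything here is made up for illustration):

// Hand-built mini "AST" for: app.get('/users/:id', handler)
const ast = {
  type: 'CallExpression',
  callee: { object: 'app', property: 'get' },
  arguments: [{ type: 'StringLiteral', value: '/users/:id' }],
};

// A "rule" pairs a predicate with an extractor -- like a regex over tree nodes.
const routeRule = {
  match: (node) =>
    node.type === 'CallExpression' &&
    node.callee.object === 'app' &&
    ['get', 'post', 'put', 'delete'].includes(node.callee.property),
  extract: (node) => ({
    method: node.callee.property.toUpperCase(),
    url: node.arguments[0].value,
    parameters: [], // a real tool would recurse into the handler body
    headers: [],
    responses: [],
  }),
};

const result = routeRule.match(ast) ? routeRule.extract(ast) : null;
console.log(result);
// -> { method: 'GET', url: '/users/:id', parameters: [], headers: [], responses: [] }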
PG wrote about building the language/abstraction to fit your problem [1]. In an ideal world we would all do this, but in practice there's always a gap between our program and the abstraction we describe it in. Today that gap is only bridged by a human understanding the code. Until now... Optic lets us programmatically deal with these implicit abstractions.
We believe most of the dev tools created over the next decade will be built on top of your code in a way that allows them to collaborate with real developers. To realize this world we need a programmatic interface to read, generate and mutate code, so we created Optic.
I was really fascinated by this as well. I think it's a great idea. Where does the 15 hours per developer per week figure come from? I couldn't find more information about that anywhere on your website.
Before I started Optic I watched 20 developers code for 1 day each. I kept logs of what they were working on and asked people to rate how they felt throughout the process. The 15 hrs/wk is a rough estimate of what we could automate. Our most avid users over the last month self-report a little less, but I think once we add better support for a few more things we'll catch up.
I never thought to publish those results. Sounds like something people would be interested in?
Can someone ELI5 this? I'm a former SW engineer but haven't read through code in a couple of years, so it can be a technical explanation. I just don't understand what's different about this vs. regular functions as first-class citizens. I tried to get it, but maybe there's an easier explanation for laymen-ish people.
You're right. My examples should've been more complicated for regular functions not to be a suitable abstraction. Now, assume that the examples are complicated. In that case:
One method to define that function would be to do it the way explained in this [0] comment. That is, express the solution with small, composable functions. This would work as long as you can express the program in a composable way. This is what I usually try to do first, unless I find expressing my solution with composable pieces is not worth the effort.
For example, an async solution to a problem would be very difficult to express composably without well-thought-out building blocks. One such collection of building blocks is Reactive Extensions [1], which is the result of years of research and development. So, before Rx and similar libraries existed, it was costly to express async programs composably. That cost may or may not have been worth it.
Another method to define that function, if we're not going for "composable," would be to write it with an `options` argument, which would look something like:
type Options = {use_cache: boolean, skip_one_cycle: boolean, number_of_retries: number, ...}
The more options one function has, the lengthier its definition gets, which means its core logic can get obscured. So that'd be the tradeoff with this method.
A third method would be to define multiple variants of that function, like `do_things()` and `do_things_with_cache()` and `do_things_and_retry()`, etc. Here, each function is simpler than a single function taking an `options` arg, but the core logic would be repeated in all of them.
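A quick sketch of methods two and three, using the option names from above and made-up stand-ins for the real logic:

// Stubs so the sketch runs; imagine these are real domain functions.
const do_a = () => 1;
const do_b = (x, retries) => x + retries;
const cached = (x) => x; // pretend cache lookup

// Method two: one function, one options bag.
function do_things({ use_cache = false, number_of_retries = 0 } = {}) {
  const foo = do_a();
  // Every new option adds a branch here, burying the core logic.
  return do_b(use_cache ? cached(foo) : foo, number_of_retries);
}

// Method three: one variant per configuration; simpler signatures,
// but the core logic repeats in each.
function do_things_with_cache() {
  return do_b(cached(do_a()), 0);
}
function do_things_and_retry(retries) {
  return do_b(do_a(), retries);
}

console.log(do_things({ use_cache: true }), do_things_with_cache(), do_things_and_retry(3));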
And since these three methods are not mutually exclusive, one may even use all three of them to some extent.
What Optic seems to provide is a fourth option which allows you to forgo defining that function (as long as that makes sense) and repeat your core logic in many different parts of your code. The devtool would then recognize the core logic in all those different repetitions and assign it a single identity, which would allow you to manage that core logic from a single place. This would be especially useful in a language that does not support macros.
(Btw, if you're coding in lisp/rust/etc., macros give you another way to capture this pattern. The one tradeoff with macros is that they are yet another layer of abstraction that the author and the reader of the code must keep in their heads.)
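To make that concrete, here are the three examples from upthread collapsed into a single higher-order function: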
function baz(modifier) {
  // The shared first and last statements live in one place;
  // each call site injects only the middle step that varies.
  let foo = do_a();
  let bar = modifier(foo);
  return do_b(bar);
}
Example 1: baz((foo) => foo);
Example 2: baz((foo) => do_z(foo));
Example 3: baz((foo) => foo + 10);
I think companies like Grabb-It Labs only make the world more hellish. I really don't understand why you would start a company like this. On my commute each day they've started video advertising on my local bus and also in the subway. It just really SUCKS to be bombarded with more shitty ads for banks and big box stores.
I'm not trying to scold the founders. It just confounds me that people would start companies like this when there's clearly no net benefit to social capital.
The CSPA thing is interesting to me because the founder/CEO was formerly the head of engineering at Crunchyroll, and interviewing at Crunchyroll was one of the worst interviewing experiences I've ever had.
Let's just say I'm skeptical. I hope it works out, but...
MacD looks interesting... but I'm kinda surprised YC would invest. A Mac n Cheese company? Very strange.
I'm very excited about the biotech space, so looking forward to seeing how that shakes out.
Sorry to hear it was that bad! :( I'd love to hear your experience, especially if I was directly involved -- you can reply or email me anonymously at james@cspa.io. If you know anything about Crunchyroll's early culture, I always made a point to listen to feedback and improve ourselves. It'll be the same at CSPA.
Crunchyroll's interview process definitely was not consistent, which is why I spent my final two years there trying to improve our processes. I learned a lot from it, and that's why we're doing CSPA :)
I have no idea if you were directly involved, unfortunately, which was part of the problem. I described my experience in another comment: https://news.ycombinator.com/item?id=17795308
I thought the coding challenge was fun. They basically give you a URL, and the URL contains a list of numbers, which you append to the URL. Visiting each URL+number returns either a list of other numbers (which you keep traversing), a FAIL message (a dead end, of which there are many), or a SUCCESS message (only one). Your task is to traverse the entire tree, determine how many dead ends there are, which number is the successful end, and which path is the shortest path from your starting node to the success node. It was pretty fun, and the solution wound up being basically a breadth-first search or a depth-first search (pros and cons for each).
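If anyone's curious, the BFS version looks roughly like this (the fetchNode stub and response shape are my own invention; the real challenge's format differed):

// BFS over the challenge's URL tree. fetchNode stands in for an HTTP GET;
// each node returns child numbers, FAIL, or SUCCESS.
async function solve(baseUrl, start, fetchNode) {
  const queue = [[start]]; // each entry is the full path from the root
  let deadEnds = 0;
  let successPath = null;
  while (queue.length > 0) {
    const path = queue.shift();
    const result = await fetchNode(`${baseUrl}/${path[path.length - 1]}`);
    if (result.status === 'FAIL') {
      deadEnds++;
    } else if (result.status === 'SUCCESS') {
      if (successPath === null) successPath = path; // BFS: first hit is shortest
    } else {
      for (const child of result.children) queue.push([...path, child]);
    }
  }
  return { deadEnds, successPath };
}

// Toy tree: 0 -> [1, 2]; 1 is a dead end; 2 is the success node.
const fake = async (url) =>
  ({ '/0': { children: [1, 2] }, '/1': { status: 'FAIL' }, '/2': { status: 'SUCCESS' } }[url]);
solve('', 0, fake).then(console.log); // { deadEnds: 1, successPath: [ 0, 2 ] }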
I submitted my answer and got set up with a phone call. I don't remember anyone's names, but this guy was touted as a PhD and a genius, so I was excited to talk to him. Over the phone, he gave me another challenge: "Imagine you're Photoshop, and someone selects a pixel with the Fill tool. How do you know which pixels to fill?" As you can imagine, this turned out to be another breadth-/depth-first search problem. You look at the surrounding pixels, decide whether they're the same color or not, and then proceed to the next level and repeat.
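That one's a classic flood fill; a minimal BFS version looks something like this (4-connected grid of color values, simplified):

// BFS flood fill: the starting pixel's color defines the region to replace.
function floodFill(grid, row, col, newColor) {
  const target = grid[row][col];
  if (target === newColor) return grid; // nothing to do
  const queue = [[row, col]];
  while (queue.length > 0) {
    const [r, c] = queue.shift();
    if (grid[r]?.[c] !== target) continue; // out of bounds or different color
    grid[r][c] = newColor;
    queue.push([r + 1, c], [r - 1, c], [r, c + 1], [r, c - 1]);
  }
  return grid;
}

console.log(floodFill([[1, 1, 0], [0, 1, 0], [0, 1, 1]], 0, 0, 2));
// -> [ [ 2, 2, 0 ], [ 0, 2, 0 ], [ 0, 2, 2 ] ]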
I was offered an on-site, which I accepted. I arrived and sat in the lobby. They have a giant big-screen TV that was showing anime. Makes sense, since CrunchyRoll is an anime streaming service. The current show had a woman eating in a restaurant with some men. I couldn't understand the words, but it was clear that she was having orgasms while eating, and the men were watching. At the time it was mildly amusing, but in retrospect, it was quite inappropriate.
Eventually, I was shown to a room with a whiteboard. The first person (an engineer) comes in, says "Hi, I'm Jack. So you have M trains and N stations..." and starts writing on the board. No idea who this guy is or why he's talking to me. I, being nervous, tried to follow along. His question was ultimately another search problem. Due to poor time management, I didn't have time for questions. He nodded and walked out to get the next person.
Second engineer comes in. "Hi, I'm Brad. You have M dogs and N cats..." and jumped right into his problem. No context, no anything. Yet another search problem. At the end, there was no time for questions.
Third guy comes in, same thing.
Fourth guy comes in, he actually introduces himself. He's the head of Product (or at least some Product Manager). He asks me a couple of questions, then gives me a coding challenge on the whiteboard. Another search problem.
Fifth guy, the PhD guy, comes in, says "Hi, I'm Kevin. You have..." and launches into yet another search problem (again!). Having done six of them by this point, I breezed through it and finally had time for one question. I said, "You guys must do search problems a lot here at CrunchyRoll."
He cocks his head to the side, looks at me like I'm crazy, and says, "No, never."
The sixth guy comes in and gives me something different! "You have a box and a bunch of random objects. What's the most efficient way to pack it?" which, of course, is the Knapsack Problem, which is NP-hard, so there's no known efficient general solution. But I do my best.
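(For reference, the textbook 0/1 variant, where each object has a weight and a value, does have a well-known dynamic programming solution that's pseudo-polynomial in the capacity; it's the general problem that resists efficient algorithms. Something like:)

// Classic 0/1 knapsack DP: maximize value within a weight capacity.
// O(n * capacity) time -- pseudo-polynomial, since capacity is a number, not an input size.
function knapsack(items, capacity) {
  const best = new Array(capacity + 1).fill(0); // best[w] = max value at weight limit w
  for (const { weight, value } of items) {
    for (let w = capacity; w >= weight; w--) { // descending: each item used at most once
      best[w] = Math.max(best[w], best[w - weight] + value);
    }
  }
  return best[capacity];
}

console.log(knapsack([{ weight: 2, value: 3 }, { weight: 3, value: 4 }], 5)); // 7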
At this point, I'm over it.
I know CrunchyRoll does not fill boxes. I know they don't do any of the things they're quizzing me on. I don't know anything about the business other than what I've read online. I don't know what any of these people do. I haven't been given any context about anything, and when I did manage to ask some questions, I got nonsensical answers. I don't know which team I'd be working on or what role I'd take over. I don't know which of these alleged geniuses would be my coworkers (or god forbid, my boss). I don't know what problems CrunchyRoll's engineering team is trying to solve. I don't know their tech stack, their culture, or their company strategy.
I was not offered a position, but even if I had been, I would have turned it down. It was pretty bad across the board.
Ironically, the next company I worked for did some work on behalf of CrunchyRoll in the data engineering space. Thought that was funny.
I had a similar experience at Zenefits the next day, but halfway through the interview I declined to proceed further once I realized it was the same thing.
On the positive side, I've interviewed with a dozen other companies in the intervening years, and most of them have been fairly positive.
>The current show had a woman eating in a restaurant with some men... At the time it was mildly amusing, but in retrospect, it was quite inappropriate.
Probably was Shokugeki no Soma
The first few rounds don't really seem too dissimilar from other firms (in that they'd ask a bunch of random algorithmic problems that you will never likely come across in your job). That being said, six consecutive rounds of interview with these sorts of problems does indeed seem a bit too much.
It wasn't necessarily six consecutive rounds with "these sorts of problems", it was six consecutive rounds of _the same problem_ just worded differently. And the actual questions were less the terrible part than the fact that interviews work both ways. An interview is a chance for _me_ to learn about _you_ just as much as it is for _you_ to learn about _me_. A very good software engineer in a city with very high demand for software engineers has a lot of options. CrunchyRoll is not the only company asking engineers to spend their entire day in the office. You can't treat an interview as an obstacle course that the candidate has to overcome.
Crunchyroll showed zero interest in learning anything about me other than whether or not I can search a tree. They showed zero interest in letting me learn anything about the company other than I don't want to work there. They showed zero ability to collaborate and coordinate prior to the interview. They showed no interest in me personally as a potential team member. They showed zero ability to recognize that I gave up an entire day (and likely lied to my boss about why I was missing work that day) to come into their office and... waste my time? Yeah, not great.
Conversely, I've been involved with dozens of interviews at other companies that ask algorithmic questions that were pleasant, interesting, and an actual two-way street. There were plenty of companies whose interviewers seemed to actually know my name, and who spent time trying to convince me that I _should_ work there.
IMO, if you bring candidates on-site and you're only asking them algorithmic questions, your recruitment funnel has failed. You shouldn't bring anyone into your office unless you're pretty sure you're going to hire them. That means determining whether or not they're competent _before_ they come in. The on-site should be a validation of what you already know (e.g., verify they actually did the coding challenge you sent them and that they're the same person from the phone call(s)), a confirmation that their temperament and personality are compatible, and then selling them on joining the company.
To be clear, I don't have a problem with algorithmic questions, but more than one or two is a waste of time.
And yes, this pattern is sadly common in SF, but common does not make it okay.
It sounds like this happened after I left Crunchyroll in 2015. The process I had in place was definitely not like that :/
Well FWIW, I personally wrote the coding exercise that you enjoyed (http://www.crunchyroll.com/tech-challenge/roaming-math/yourn...). I thought it was a good mix of basic algorithms (tree traversal) as well as some practical knowledge (curl/HTTP). I'm surprised they are still using the same one :)
My interview was in June 2015. Maybe you had left by then. It did seem disorganized.
I did really enjoy that coding challenge, though. Good job with it. It is the one thing I did like about the whole experience, and I still share it with people when they ask what a good coding challenge might look like. Tree traversal isn't particularly useful in the web dev world, but it's a pretty intuitive task, so I think it gives good insight into whether a programmer can reason logically about a task.
I'd rather see real-world challenges, though sometimes that's hard, depending on the company.
CSPA [1] sounded interesting, and is something similar to what I’ve been wanting to do, but after looking at the sample exam, I was pretty disappointed.
I thought it would be about computer science and engineering fundamentals. It has some of that (e.g., representation of integers, memory access speeds, etc.), but I also noticed it's full of completely inessential things, like escaping XHTML, identifying valid JSON payloads in HTTP, and alignment rules in CSS. There were heaps of JavaScript, HTML, CSS, SQL, and drawn-out shell exercises.
To be clear, I’m not claiming that’s all useless knowledge. It’s not! But, as a hiring manager, I’d rather look for strong critical thinking skills and strong foundational knowledge. Remembering that keys in JSON must be quoted strings is not at all a demonstration of that.
The exam seems to test for a "well-rounded full-stack engineer" rather than what they suggest.
I would hope that the scoring differentiates between core CS knowledge and skills vs. details and trivia of web development, as well as any other categories of questions. That way, the exam is useful for multiple hiring philosophies, and the hiring manager can decide how much they care about specific types of questions. After all, different roles at different companies have different needs, and I'd hate to have a biased scoring algorithm take those decisions away from a hiring process.
Excellent points -- this is something we discuss extensively. After talking with so many employers, we found that each of them looks for something different.
Adaptive scoring algorithms and ways to let employers create custom projections of our multidimensional data is something we're thinking about. If you have ideas on how to build this, I'd love to hear them! james@cspa.io
The issue you're having is that you're trying to test over 7 dimensions, with the majority of those dimensions being irrelevant to an individual taking the test to signal mastery of 2-3 of them.
The standard method in testing to deal with issues of this nature is to give people the opportunity to test themselves on modules, rather than an omnibus test where they spend the majority of their time signalling that they aren't good at things they don't care about. If they're good at everything, they can sit for all of the modules. If they aren't, they aren't wasting their time.
Additionally, this frees up candidate time to let you test people in more depth in the areas they're actually professing skill in. From that point, you're open to do interesting things: for instance, evaluate question difficulty and time to completion, and adjust the questions provided as the test unfolds to match the expected skill of the applicant.
As it is now, you're getting buy-in from corps that have little to no skin in the game, but I don't see how the sample test you have provides more signalling value than an applicant having a CS degree or a list of GitHub projects they can discuss.
Re: custom modules. One (arguably) interesting thing we do is to take only the MAXIMUM subscore of the 7 topics and use that component in the composite score, ignoring the other 6 topics. This allows candidates to focus on only certain topics if they want. Employers want someone who is strong in one area, rather than a jack-of-all-trades and master of none.
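Roughly, the topic component works like this (a simplified sketch, not our exact weighting):

// Simplified: take the single best subscore and ignore the other six,
// so candidates can specialize.
function topicComponent(subscores) {
  return Math.max(...Object.values(subscores));
}

console.log(topicComponent({ networking: 680, ml: 320, web: 500 })); // 680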
The algo is to take Core + the best other section, right? As it stands, the test difficulty makes it pretty easy to hit 2 perfect scores, so you'll be competing against those pretty consistently.
How does the CSPA help my interviewing process if I'm a network guru and knock all those questions out of the park, but apply for a ML job where my knowledge is paltry? How does my CSPA testing help me differentiate from the above guy as a network guru when I also aced the ML section? This seems like an obvious area the test can outperform other testing metrics.
As far as I can tell, the only value of the current scoring system is to weed out people who are just plain abysmal at everything. Eventually people are going to game your exam, and very close analogues will pop up on Google. At that point even the floor function is dead, with the added deadweight cost of the exam still being an application requirement at BigCo.
Maybe I'm just blinded because I think having accurate skill radar charts that test takers and employers could use for self improvement and prospect evaluation, respectively, could be an absurdly large value add.
Yep, I agree! In fact, about 50% of the people who take the CSPA do it more for self-evaluation than for employment purposes.
You're right, so far most of the test takers are entry-level or changing careers. To accurately assess specialities like ops/IT, we'll need separate, dedicated subject tests (or adaptive tests).
That said, no one has yet gotten a perfect 1600 :) . The highest is something like 1380.
I concur with this sentiment. I was just doing the hiring loop and came to the conclusion that taking CSPA would have been a total waste of my time. I don’t know how many companies would value the data after looking at the example questions, but I almost find it to be a good signal that I shouldn’t apply to companies that use it.
A standardized assessment is literally the farthest thing from what happens in real-world programming. In contrast, the Triplebyte interview questions focus heavily on what an engineer actually does day to day. When I completed Triplebyte, I was pleasantly surprised at how well oriented the assessment was. CSPA just looked plain insane in comparison.
Even the time commitments are a good contrast. Triplebyte’s commitment on the quiz side is about 30 minutes, and the final interview is a couple hours. CSPA is five hours.
Their topics page https://cspa.io/topics also has numerous duplicate "example" links. Some of them are barely related to the sub-topic line item and some not at all (perhaps a typo? copy-pasto?).
I provided feedback that included this issue, I believe around when it was first a "Show HN" post, but the page was never updated.
That would certainly give me pause as to the accuracy of a technical (or detail-orientation) assessment.
We're constantly improving the CSPA questions, and our latest iteration has quite a few changes. Some of the examples you mentioned assess how detail-oriented the candidates are. We're interested in how this data correlates with how bug-prone someone is.
That said, we trimmed some of those down in our latest version and replaced them with more substantial questions.
I found CSPA to be an interesting proposal. I'm currently studying CS and Biology and found that the comparison of the exam to the "SAT" of computer science is a bit of a misnomer. The SAT requires minimal prior knowledge to take (albeit if you want to score well, you end up needing to study for it through test prep books etc). I think perhaps it may be more likened to an MCAT equivalent (or any professional school exam equivalent) because of the need to know about certain subject fields.
Someone also brought up a point regarding the validity of testing for skills in web development if test takers won't end up in those fields anyway. However, when looking at similar exams (e.g., the MCAT), you can argue that the majority of doctors won't use Organic Chemistry or won't need to know the minutiae of Psychology & Sociology to be successful. The MCAT becomes a successful discriminator of good vs. bad test takers because of the amount of work and critical thinking that is needed to cover the litany of subjects tested. Similarly, the CSPA could follow the same pattern by testing a range of subjects as a means of measuring one's ability to learn a large sample of topics.
The only thing I can really think of that CSPA can improve upon is providing more preparatory materials. Additionally, it'd be useful to see how people end up performing on the exam and whether an actual bell-curve-like distribution of scores results.
I have to admit that I'm missing the take-over-the-world/crazy-moonshot kind of company among the entries. I appreciate that Y Combinator is a commercial institution with commercial goals, and that personalized daily vitamin packs, better SaaS conversion and AI-powered programming are all sensible iterations on existing themes -- but wouldn't there have been space for at least one next-gen Google/SpaceX/Hyperloop sort of venture? Surely someone applied who'd fit that bill.
Looking at previous, recent batches, you'd be hard-pressed to find startups that seem undoable or look like a joke (which are usually the ones with huge potential). For me, only Boom and a couple of biotech companies make the list.
They're not doing fake reviews. They're choosing which user reviews to show depending on the audience (e.g., if the person visiting the website is from a company in the hotel industry, the website might show testimonials from other hotel companies like Hilton as opposed to software startups).
I interviewed, I've talked to several other robotics companies that interviewed for S18. Absolutely dumbfounded that none of them made it, some of these in the batch seem like they could never be worth $10b.
Sorry no, I'd have to dig up my notes from the interview and I'm slammed working on my startup. You don't wanna move to the west coast? You like bicycles and robots perchance?
Yeah, it is always disappointing that there aren't more startups around doing hardware. I think a large part of that is that hardware is quite expensive to develop and requires a lot of space. From the YC perspective it doesn't really work with the 3-month rapid iteration schedule; that is barely enough time to get tooled prototypes.
Nice batch. I think the concept Federacy is tackling has huge potential. We lose a lot of sleep thinking about security, and I like the idea of being able to tap a hive mind to help us plug security gaps our team is unaware of (and we would pay a lot for it).
Appreciate the kind words! Reaching out now -- would love to get you into our beta and work from any of your feedback. Finsight seems like a perfect use case and we're excited to dig in.
MAC'D sounds like an interesting idea. If some of the options were healthy (perhaps you could pair mac'n'cheese with tomato soup) I could definitely see myself eating here.
> Synvivia develops biomolecular ON-OFF switches to make synthetic biology safe for use outside of the laboratory.
Yep...then the on-off switch jumps species and the only way to survive is to consume a particular (patent pending) brand of parsnip which can't be grown from seed.
Pretty much the only place I could reasonably be accused of being a luddite is in releasing GM organisms into the wild.
I definitely agree about the wild. But what about designing gut bacteria that target parasites in your poop? Amongst many other possibilities.
You can use oxygen, temperature, and light-sensitive gene promoters and toxin-antitoxin plasmids to make ON-OFF switches that would hobble bacterial replication in the wild.
It's a tough problem, and my guess is you'd want to use several of these switches together. There's no guarantee that stressed bacteria will 'run' your programs. And you're right that the gene could move horizontally, so you'd have to be very clever with your kill switches, and keep up with evolution.
Apart from the last resort of antibiotics, you can also selectively deplete particular bacteria from your poop using adhesion inhibitors such as mannosides and galactosides (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5654549/ - research from my lab).
The downside to Web MD might be that any charlatan may pretend to be a doctor.
When you've got the answers for common diseases from a doctor who gets them from an app, the doctor himself is a replaceable middleman and can be anybody with enough confidence, even if faked.
"Customers pick a cheese sauce, a pasta base, add unlimited toppings like roasted broccoli and mushrooms, and top it off with anything from truffle oil to Hot Cheetos." - So basically Noodles and Co.
I'm not an insider on the terminology, but it seems a "wet lab" is what a lay person, like myself, thinks of when we hear "a lab" (in the physical sciences context).
What's the "dry" (or is the term of art different?) lab situation you wish it would be useful for?
There could be a different startup opportunity, if it's scalably computer-heavy [1]. I'm aware of computer-controlled instruments for wet labs being a very underserved market (and presumably with high margins for the computer portion), but that seems like a consulting/services/labor-intensive business.
[1] Around HN, that usually means software, but my background is Ops, so I actually consider commodity hardware quite scalable, too, despite the hardware-is-a-nightmare-we-have-to-do-cloud mythology.
Yeah, "wet lab" is what most people thing of as a "lab". Some fields (lots of the biology and biology-adjacent ones) use "lab" as a logical unit of organization. So Professor So-and-So and their graduate students, postdocs, etc. is "The So-and-So Lab".
"Wet lab" is useful to distinguish people with actual labs vs. my lab, which is just a bunch of people with laptops.
The situation I'm dreaming of is, for example, it not being my problem when a grad student is having trouble getting software installed on the server. That would be something I'd love a lab manager or dedicated programmer to deal with, but for a small lab, I don't have nearly that level of funding.
> my lab, which is just a bunch of people with laptops.
OK, so it's something akin to Wikipedia's description of "dry lab".
> The situation I'm dreaming of, is, for example, it not being my problem when a grad student is having trouble getting software installed on the server. That would be something I'd love a lab manager or dedicated programmer to deal with
This sounds very much like IT/Helpdesk support, of the kind a programmer or a lab manager typically wouldn't want (and might not be qualified) to deal with, either.
Is there something unique to your lab environment that you couldn't use some kind of shared/general IT helpdesk service, perhaps focused on scientific software users?
If not, there may well be an unmet need that a startup could fill, especially if everyone in each lab needs identical support and they can combine it with some kind of cloud-based service with basics pre-installed. However, if you're looking for unlimited, individualized user support for a limited, flat fee, that's not sustainable.
> I don't have nearly that level of funding
For a service provider, the question then becomes, do you have enough funding for the amount of service you'd require (plus the provider's overhead/profit), even if it's not nearly enough for a full-time person (which can be remarkably expensive in popular tech hubs)?
Reasonable IT support to user ratios are somewhere in the 1:20-1:100 range, which is the same range (for a 3-person lab) as HappiLab's pricing.
"This sounds very much like IT/Helpdesk support, of the kind a programmer or a lab manager typically wouldn't want (and might not be qualified) to deal with, either." - Not if they were hired with that qualification in mind.
"Is there something unique to your lab environment that you couldn't use some kind of shared/general IT helpdesk service, perhaps focused on scientific software users?" - It's more there are parts of HappiLab that I could see using (managing grant budgets, which was...an unpleasant part of my week last week), etc. but my pain points are slightly different from those of a wet lab, but still within the realm of "I'd prefer someone familiar with academia and research rather than a general purpose group."
> Not if they were hired with that qualification in mind.
That's a big "if". That's why I said "typically". I think you'd find it difficult, if not impossible, to hire someone like that, even though they exist.
This is a conceit I often see in job postings, of listing skills that belong to two (or more!) different specialties. It seems more common in cases where the goal seems to be to have two specialists for the price of one [1].
> "I'd prefer someone familiar with academia and research rather than a general purpose group."
I'm pretty sure "prefer" rather than "urgently need" isn't compelling enough to give a startup enough of a competitive advantage. There's also not much (if any) synergy with the rest of HappiLab's core competencies to make sense as an add-on for them.
IT services could be just another vendor HappiLab handles.
[1] Uncharitably, two full-time experts for one salary, but, charitably, merely two half-time experts in one person, which seems likely for startups and other cash-strapped groups.
"I'm pretty sure "prefer" rather than "urgently need" isn't compelling enough to give a startup a enough of a competitive advantage. There's also not much (if any) synergy with any of the rest of HappiLab's core competencies to make sense as an addon for them.
IT services could be just another vendor HappiLab handles."
Potentially. I think one of the things that will become more important - and will synergize - is the number of wet/dry labs. There are an awful lot of new computational servers being bought by non-tech savvy PIs.
I'd hope that's not just viewed as an opportunity to exploit information asymmetry by tech/IT services firms.
I do believe, however, that scientific computing needs are similar enough to general computing needs that this won't be widespread (or at least not for long). It would also mean that it wouldn't make sense for a lab services provider to offer generic IT except as a pass-through for convenience (subject to cost-saving disintermediation).
IOW, I doubt you're that special, but that's a good thing!
It seems very interesting, but to be honest, after watching the demo video, most of the things "automated" (I don't see any AI here) are related to poor practices...
Some other features seem interesting; let's see how the product evolves.
Hey, founder here, thanks for checking us out. The demo video is a couple months old and since then we've gotten much closer to figuring out the best use cases after talking to users. Do you have any suggestions on the use cases you'd like to see the 'interesting features' put towards?
This is definitely the kind of product that takes several iterations to get right so thank you for following our journey.