Lessons from YC AI Startups (ignorance.ai)
137 points by charlierguo on Sept 13, 2023 | 93 comments



Most of my dev work is in logistics for a very staid industry that doesn't even make this list and that would have a difficult time finding a use for LLMs, because most of the day-to-day jobs involve manual labor. That said, I've been asked "how can AI help us" quite a bit (the unsaid ending to that sentence being, "...help us lay off workers"). Just as a thought experiment, I've considered the pros and cons of automating certain linguistic-heavy aspects of the business. My considered determination is that the penalty from a single fuckup by an LLM in any important scenario would dramatically outweigh all other potential savings.

This is probably why almost no one in sectors outside those that already provide white-label customer service is seriously considering implementing these things. There is no barrier to doing so from a financial or technical perspective; in fact, there's every incentive to try it. Businesses like the one I'm in are just waiting to see how exactly the first movers will be dismembered.


An LLM-based tool is very good for any case where 1) the user is an SME, and 2) the user can easily verify the generated output.

After that it's just a gradual creep into LLM ops and madness. Speaking from the other side of that descent into madness.

As obvious as it may be, production LLM tools work on your data. You can't simply use an external benchmark to verify if your tool works for your use case. You will always have to build evaluation processes.

I'd say there are two types of tests you will end up running.

1) Statistical tests - AKA good old ML.

2) Semantic tests - here be dragons.

Semantic tests break down further based on HOW you are using the LLM. (Categorization, Summarization)

The issue with semantic testing is the amount of human effort. It's more akin to setting up exams and evaluating answers. Also, your student may be tripping randomly.

Categorization you can simplify down to almost standard ML workflows. Summarization? That takes effort to verify.
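To make the categorization case concrete, here is a minimal evaluation sketch. The predict_category() helper standing in for your actual LLM prompt is hypothetical, as is the eval_set.json file; the scoring uses scikit-learn:

    import json
    from sklearn.metrics import classification_report

    # Hand-labeled evaluation set: the "exam" you set up once.
    # (Hypothetical file of [{"text": ..., "label": ...}, ...] records.)
    with open("eval_set.json") as f:
        eval_set = json.load(f)

    # predict_category() wraps your actual LLM prompt; hypothetical here.
    predictions = [predict_category(ex["text"]) for ex in eval_set]
    labels = [ex["label"] for ex in eval_set]

    # Because the output space is a fixed label set, ordinary
    # classification metrics apply; no per-run human grading needed.
    print(classification_report(labels, predictions))

Summarization offers no such fixed label set, which is why it stays in human-grading territory.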


It really sounds like AI is a bubble when people ask how they can cram AI into their workflows rather than coming up with real problems they want to automate.

I suspect that a very large number of applications that could've been "normal" programs are going to be AI'd with no clear pros but many cons.


That's an unnecessarily dismissive take.

Let's assume for a moment that you're a non-technical 55-70yo manager or owner of a business -- perhaps someone like your mom, dad, uncle, or grandparents.

What they're really asking is: "There's a whole lot of lip service right now about big leaps in AI capabilities. I've tried ChatGPT, and it's pretty cool. I have no idea what the limitations are, or how practical any given application is. You are technically inclined. Do you think any part of our business could benefit?"


I keep bringing this up, but this Launch HN post about Roundtable: https://news.ycombinator.com/item?id=36865625

still shocks me. It's a company that uses AI to produce survey results. I'll let you read their pitch/description and decide for yourself, but I think it's very fair to say that this is a service to fabricate survey results to validate whatever idea it is you had beforehand. But even side-stepping that, they claim to have overcome bias in their datasets and refused to elaborate on 1) how they did that and 2) how they could prove that they did that.

As long as this community, which is far more technically sophisticated than the general public, isn't laughing companies like that out of the room, we're in serious trouble.

There was also this thread: https://news.ycombinator.com/item?id=37259753 , an individual's project to provide an AI therapist. While people here and there did mention the cons of having a program provide medical treatment, the overall sentiment wasn't at all negative.

I'm not even some AI luddite: I use and greatly benefit from some AI tools. But just like crypto, AI isn't the be-all-end-all technology. The difference is that where crypto is primarily a financial risk to people duped into using dubious-at-best, scams-at-worst products, AI will cause real, concrete harm, i.e., https://www.euronews.com/next/2023/03/31/man-ends-his-life-a...


> an individual's project to provide an AI therapist and while people here and there did mention the cons of having a program provide medical treatment, the overall sentiment wasn't at all negative.

Sorta. You're forgetting to compare it to an alternative. Compared to a licensed therapist that you can find, schedule, travel to and afford... for some people an imperfect something is better than the nothing they have now.


This is really the only part of the post where I too started pondering.

Wholeheartedly agree with the rest and am thankful for the link compilation.

IMHO, the thing with therapy for mental health problems is that it's mostly in such a sorry state that it's an insult to GP's point to call it a medical treatment. I only make that bold claim because of my own experience. As always, YMMV.

Doesn't make the AI thing less creepy, but I'd still carefully consider there's some real potential here.


For me, the idea of robots being necessary to provide needed therapy that otherwise would be unavailable is enough to induce existential despair. I really can't imagine it as a viable long term treatment. Much of the point of therapy is just having a human advocate who is invested in your wellbeing. AI therapy sends the message that human problems are not worthy of human attention.


I don't disagree. That said, we still live in a world that has always known hunger and war, and those are even lower on Maslow's hierarchy. So my point is simply that we are where we are, and anything that helps is worth doing rather than not doing because the more ideal alternative isn't happening.


> AI therapy sends the message that human problems are not worthy of human attention

Scratch out 'AI therapy' and put in 'capitalism' and you have the answer to the problems we have.

Therapy is a job that people do to pay the bills after the 10th person they've seen that day.


>> Therapy is a job that people do to pay the bills after the 10th person they've seen that day.

You're contradicting yourself. Either therapists actually care, or they wouldn't do their job without capitalist incentives. Which is it?


You seem to only deal in black-and-white situations; reality rarely presents clean-cut scenarios like that.

For example, I deal directly with customers quite often. The amount of "I give a fuck" is much higher at 8AM than it is at 4PM. Finding people who can give out a continuous amount of caring at 40 hours a week is difficult. Then you tend to have some percentage of employees who "don't give a fuck" at all and are there for a paycheck.

It turns out that if you take away that paycheck you'll find that 99% of your therapists and your engineers will go do something else. :)


Okay, so how would scrapping "capitalism" improve the availability of therapy?


First, humans make errors too. Second, there are a lot of "generic" workflows that AI could enhance:

routing emails.

voice mail routing handling.

commenting code. easy enough to add something that strips comments and verifies the code is the same (see the sketch after this list).

taking meeting notes.

going to google and pulling down notes from 10 websites.

writing a user guide for a module.

any kind of corporate-ese emails, memos etc.

generating test data.

I wouldn't allow this stuff to work without human supervision for the most part, but this list of table-stakes stuff seems to work pretty well and be worth the time to ask.
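For the code-commenting case above, the verification step is cheap because Python's parser discards comments entirely. A minimal sketch, assuming the LLM was only asked to add comments:

    import ast

    def code_unchanged(original: str, commented: str) -> bool:
        # Comments never reach the AST, so if the LLM only added
        # comments, both sources parse to identical trees. Note that
        # docstrings ARE AST nodes, so an added docstring is
        # (correctly) flagged as a change to the code.
        return ast.dump(ast.parse(original)) == ast.dump(ast.parse(commented))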


I'm not saying that AI has no utility; I'm saying that the hype is gonna push for building AI'd applications that didn't need AI.

A few of the examples you give don't need ML at all (such as routing emails, voice mail, generating test data, or pulling notes from 10 websites).


Have fun pulling notes from websites without ML. NLP just plain didn't work before ML. The thing is that with ML you are able to work with unstructured data, e.g. routing email without a complicated system the person mailing you has to understand, or hundreds of brittle, hand-written rules that in the end won't fire anyway because somebody spelled a word wrong.

I think ML/AI is just not ready for a lot of applications because it's not good enough, but our models are still improving every month. It's impossible to know what it will be like in five years.
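The "routing email without brittle rules" point is easy to sketch. A toy zero-shot router, where llm() is a hypothetical stand-in for whatever completion API you use (not a real library call):

    QUEUES = ["billing", "shipping", "returns", "other"]

    def route_email(body: str) -> str:
        # One prompt replaces hundreds of hand-written keyword rules;
        # misspellings and odd phrasings no longer break the router.
        prompt = (
            "Classify this customer email into exactly one queue: "
            + ", ".join(QUEUES) + ".\n\nEmail:\n" + body + "\n\nQueue:"
        )
        answer = llm(prompt).strip().lower()
        # Constrain the free-text output back to the fixed label set.
        return answer if answer in QUEUES else "other"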


My 2 cents: this is like the early days of PCs or the internet. It's obvious to all that there is immense value, but it's unclear where that value is going to stick. Compaq ended up going bust after figuring out how to get around IBM BIOS licensing.

In hindsight, the advent of the PC, mobile, and the internet was economically transformative. However, PC vendors generally fared poorly, mobile was dominated by a small number of behemoths, and the internet is dominated by giant consumer tech firms and a dizzying array of B2B firms.

It's pretty tough to pick out the winners today. I thought consumer plays were fools' errands up until the phi-1.5/WebLLM papers. Now it's looking like we'll have GPT-3.5-like behavior in the browser, on common consumer hardware, by year end.


Yup. Well said.

As for the cons, a nice way to put it would be that adding a black box into the workflow makes everything after it undefined behavior. And there's no shortage of C-Suite guys with itchy trigger feet.


It's been that way for years, but with data-science and machine learning.


In logistics, I would suggest LLMs as an assistant to workers. The worker would still make the judgements, but an LLM could fill out forms, suggest alternatives, remind of unfinished tasks, or help look up information such as the right contact person at customer or vendor X. These features mostly reduce tedium but keep judgement in the hands of the person.

Future LLMs should gradually become more powerful, and any work on such an assistant today will be good preparation for more powerful assistants.

Vertical specialties like logistics are in fact the BEST place to use LLMs, quoting from the article:

"The most interesting (and likely valuable) companies are the ones that take boring industries and find non-obvious use cases for AI. In those cases, the key is having a team that can effectively distribute a product to users, with or without AI"


> My considered determination is that the penalty from a single fuckup by an LLM in any important scenario would dramatically outweigh all other potential savings.

Attention all driverless car companies: hire this guy and make him your CEO.



I am in computational chemistry which has some crossover with Materials Science. AI is useful for speeding up simulations, doing molecular interaction predictions, doing property predictions, doing molecular docking. LLMs are helpful as research assistants and helping design and run experiments. https://atomictessellator.com/

For example, my latest module (I am putting the finishing touches on it as we speak) uses LLMs to review catalysis literature, summarize it, and control another coding LLM that has been trained to run the simulation tools I created, to try and reproduce the work in the papers. Yes, it works; the first catalyst discoveries were made just a few days ago.


Kind of mind-blowing. A light at the end of the tunnel for the reproducibility crisis?

What specifically have you trained your coding LLM on? Is it LoRA or something more advanced? Have you created a corpus by hand specifically for training?


Yes, created by hand. Lots of techniques are required to get a well-running system: GoT (graph of thought), RAG (retrieval-augmented generation), Ko detection, dynamic problem decomposition, and a few more techniques I have invented but don't really have names for. It's also quite a bit more complicated than the simplistic answer I gave before, because you have to do things like experiments in abstraction laddering to get good interface composability.
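Of the techniques listed, RAG is the easiest to sketch. A toy version, with embed() and llm() as hypothetical stand-ins for your embedding model and completion API:

    import numpy as np

    def retrieve(query: str, docs: list[str], doc_vecs: np.ndarray, k: int = 3) -> list[str]:
        # Cosine similarity between the query and each pre-embedded doc.
        q = embed(query)
        sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
        return [docs[i] for i in np.argsort(sims)[-k:][::-1]]

    def answer(query: str, docs: list[str], doc_vecs: np.ndarray) -> str:
        # Ground the generation in retrieved passages, e.g. paragraphs
        # pulled from the catalysis papers, instead of raw model recall.
        context = "\n---\n".join(retrieve(query, docs, doc_vecs))
        return llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")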

This space is moving so quickly that I run many experiments every day.

I would love to work on this full time. I applied to Y Combinator but I didn't get in :(



Random question for you: How feasible is it for someone with a software/ML background to get into this space? I'd like to get into learning a new domain through ML, but it feels very daunting.


It sounds cliche but the old advice “do something you’re passionate about” applies. I have loved chemistry since I was a teenager because I had an inspiring teacher, without this passion I would have given up when I got to the very hard parts of math.

Personally, I had to level up my math game a lot. When I started, I had to keep pausing to look up terms, then write tiny prototypes, then keep learning. At the start it took me a month to read and understand a paper; now I can skim-read them in minutes. Without a passion for chemistry and the self-discipline to keep at it, I would have given up.

Take extensive notes, actually review them frequently, and refactor them as you learn.

Find mentors in the space. While reading papers I took names and reached out; I offered free programming to anyone I found inspiring, and this built relationships with some pretty amazing professors and students around the world.

Hope that helps! <3


That’s too bad, did YC give you a reason?


YC doesn't give reasons.


Are you reproducing the results of other simulation papers by running simulations, or results from other experimental papers?

Is interpreting the other papers and translating them into simulation parameters a rate-limiting step in catalysis research? Or is this like fitting your simulation package's parameters to get the same output as someone else's?


I previously worked on the intersection of ML and materials science (specifically batteries). I think the link to osium-ai[0] tells the story pretty well. Materials science is plagued by long, expensive exploration phases, even in places you wouldn't expect them. ML ends up being really good at cutting the expense of these phases by >50%, by making you just a little smarter in your exploration.

[0] https://www.ycombinator.com/companies/osium-ai
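One common pattern behind that "smarter exploration" claim is active learning: fit a cheap surrogate on the experiments done so far and spend the next lab run where the model is least sure. A toy sketch using scikit-learn (illustrative setup, not the commenter's actual pipeline; pick_next_experiment is a hypothetical helper):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def pick_next_experiment(measured_X: np.ndarray, measured_y: np.ndarray,
                             candidate_X: np.ndarray) -> int:
        # Surrogate model for the expensive property measurement.
        model = RandomForestRegressor(n_estimators=200).fit(measured_X, measured_y)
        # Spread across the ensemble's trees as a crude uncertainty proxy.
        per_tree = np.stack([t.predict(candidate_X) for t in model.estimators_])
        # Run the real (expensive) experiment on the most uncertain candidate.
        return int(np.argmax(per_tree.std(axis=0)))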


But this isn't new, is it?

I remember, e.g., that Italy and Brazil had projects more than ten years ago where they used some sort of machine learning to find hidden historical buildings under terrain or jungle by looking at patterns in satellite/aerial images, and it was successful in finding archeological sites in both countries.


Very similar in nature yes - we had a few rather spectacular results using only techniques from the 80s and earlier.

A lot of the revolution is coming from materials scientists doing the work of creating data sets that encode their knowledge, better computational techniques in the field (e.g. DFT[0]), and the democratization of petabyte-scale data pipelines.

[0] https://en.wikipedia.org/wiki/Density_functional_theory


By “exploration” he’s referring to Materials Science stuff, like figuring out molecular combinations that produce materials with desired properties. Not Archeological/GIS exploration.


The examples of Materials Science and Archeology are both looking for patterns.


Are there any examples of successful exits, or AI companies making it into the black yet?

In general, not just YC


I feel there's really no more place for an "AI startup" than there is for a "database startup". There are some exceptions, but it's hard to compete with open source and with large corporations treating AI tech as a loss leader.

I work at an AI startup that, if you squint, could be described as "AI for pricing insurance policies". As the company grows, we keep adding a lot more non-AI pieces. A lot more goes into the frontend, which keeps getting non-AI features. In a closed industry you can't get a clean dataset for everything, so lots of heuristics and domain knowledge go into some pieces of the equation. Custom APIs and integrations for customers, etc.

My point is, any "AI startup", by the time of exit, won't be an AI startup but a "problem X startup", where AI was initially used to address X. It will have a lot more non-AI pieces than AI. The rare exceptions of base AI technology will get commoditized pretty soon anyway.


> I work at an AI startup that, if you squint, could be described as "AI for pricing insurance policies".

I work in a very similar domain, and my company is also in the business of "solving problem x" with AI as the means to do so. It's in an area where effectiveness in solving the problem can be clearly measured, so it's easy to calculate ROI for customers.

The main downside of the AI hype, IMHO, is the conflation with LLMs and the AI bubble. We do plan to leverage LLMs in some specific ways, but it's not the core of our business, just part of the solution.


Why is this being downvoted? It's not meant as a negative question, but honest curiosity.

It's been on my mind since recently watching a vintage episode of "Computer Chronicles", from the '80s, about the coming AI boom. I'm not aware of any of those companies still being around, so I could not help but wonder what's different now.

Here is the episode:

https://youtu.be/_S3m0V_ZF_Q?si=2XrE4nnw1hB4X1xy

Real talk, to be fair: I've helped architect, design, and bring to market several successful "AI" products in the medical space. There are good, useful, value-add applications for the tech out there. But also, to be fair, the successes I've seen have always been companies that don't call themselves AI companies. For example, a surgical robotics company that calls itself a surgical robotics company and uses some AI to enable certain value-add features: I've seen success there. A company that calls itself an "AI company" that does robotic surgery: I've not seen those types of companies be successful.


Wit.ai. YC Winter 2014. Natural language tools for developers. Acquired by Facebook in 2015.

https://www.ycombinator.com/companies/wit-ai


Did Facebook roll this into their internal DevOps stuff? Is it available as a product or service from them?


MosaicML sold to DataBricks for $1.3B a few months ago. Probably sold too early.


Is DataBricks in the black?


You have to split this question into pre-ChatGPT and post-ChatGPT. Obviously for post we have no data yet.


Thank you.

Ok, so pre. What's the list?


Mine. Only because my time is free and I already had the hardware.


Google?


Did they call themselves an "AI" company when they were raising funds?


> AI is (still) eating the world.

Citation needed.

I work in this space so am fairly optimistic, but it's worth pointing out we're still mostly talking about AI hype. Nothing has really been eaten at all yet from what I've seen.

The article claims that there are tons of startups in this space, which is true, but that doesn't mean any of these startups have actually solved any major problems yet.

As a reminder, we still haven't "solved" autonomous driving quite yet and we were much further along that road 5 years ago.

Working on shipping LLM-driven products every day, I'm becoming increasingly concerned that there's no way the products proposed can possibly match the hype around them. Which is a bit of a bummer, since these models have a lot of potential in the sub-hype space, but I fear future backlash will lead us to squander that potential.


Consumer of LLMs for an F500 here.

We know it's all hype (We do the same to our customers too, by the by).

The LLMs still give us huge savings and (hopefully soon) some advantages in the marketplace. 15x time savings seem common. Our copy-editing teams are maaaaaybe going to be on track for, like, the first time ever. Our customer support teams, in test cases, really love the LLMs, and their concerns are being worked on (no, actually, really this time!). Field people are also really liking them for rubber-ducky exercises. Customers and stockholders are demanding that we have some sort of LLM to interact with.

For such a new tech, I've seen pretty widespread adoption inside the company, at honestly record rates. We had issues where our IT banned most of the LLMs internally, but there were too many people just grabbing a personal computer/mobile to use them anyway. I want to repeat this: people were going around IT to get more work done. Not playing games, not watching Netflix. Actual real honest work.

I've never even heard of something like that in many years of working in corporate America. All for tens of dollars a month per seat.

Keep doing what you're doing with these things. We can't get enough of them.


Any task in 2023 that still requires paying a human in an expensive Western country, despite all of the rules and regulations and legal liability and so on, means someone has already tried to automate it and failed. For whatever reason - the ambiguity, the physical dexterity, maybe it's even union or government rules - it cannot be automated easily.

So in come LLMs. The developer thinks, "I know, I will just take all the input coming into the screen, put it in a magic LLM box, and get the right answer!" Like so many AI startups before them, their failure is written before they even start.


Year 1900: Heavier than air flight will never happen, the failure is written before they even started.


The title should be more about stats than lessons.


Odd that the first two categories of AI startups were AI infrastructure. Reminds me of blockchain to some degree.

Would have expected to see more vertically focused solutions in the top 4 list. E.g., transportation, oil & gas, agriculture, etc. are all huge markets.


“During a gold rush, sell shovels.”


I don't disagree, but two reasons we don't see them (yet):

(1) much harder to launch as some amorphous vertical AI - like, what would a 'Transportation AI' look like? Versus targeting a specific workflow within it - "AI copilot for truck drivers".

(2) The large incumbent software platforms that already are powering the day-to-day workflows are THE companies to implement AI first and most accessibly. So the likely winner of the vertical AI race is whatever incumbent platform is already in the highest % of companies or powering the highest % of workflows.


It's startling how much transportation has already been optimized, and agriculture + AI is a natural pairing, but John Deere isn't a startup in the traditional sense. They've said they will have fully autonomous fleets by 2030.


JD has been doing "AI" since before a significant portion of the HN readership was even born. Integration of digital info tech into farming was something I was reading about in the '80s, and it is now just standard on most farms. These days, in things like crop fertilization, your equipment 'learns' the quality of the soil from maps and other tests, and increases/decreases the amount it outputs based on a pretty fine grid system.


Wait a couple of years and they will appear.


Anyone have the full list of startups?



I definitely miss the TechCrunch curated demo day summary posts. Not that I'm paying them a zillion dollars for that post, of course. Anyone know other places that give picks of the best YC startups in each demo day?


At the YC website. Their Twitter page also posts the latest ones.


The big circle is picks and shovels. Yawn.


AI startups have inspired me to think more about what "value creation" really is. There are so many AI startups out there that add AI to make silly process X easier, when that really hides the problem of silly process X existing in the first place. From one perspective, adding AI solves the problem. From another perspective, it further entrenches the problem. Does this create net value? AI is both inspiring and demoralizing... inspiring in that it unlocks a million new doors, but demoralizing in that most of those doors appear to be empty.


I think most investors look at these companies and invest based on whether they think they can find a greater fool. Having a CEO who can talk smart, sound confident, and spew bullshit is a golden signal for that. The product and tech don't matter; they just need to hype and sell the vision.


An immediate silly process that comes to mind is recruitment. The world of recruitment is working on some level: people are getting jobs, companies are getting people, but it is ridiculously inefficient, and the inefficiencies are filled with bad solutions: internal recruiters with very little understanding of the positions they're trying to fill, external recruiters with maybe a little more understanding but no understanding of the various companies for whom they're filling roles, poorly optimized interviews, millions of work-hours wasted.

And then AI comes in and... generates resumes for an individual against the job descriptions they feed it; or, sifts through a thousand candidates at once and presents the "best 10 options," which in reality are basically 10 candidates chosen at random or worse; or, generates random technical questions; or, feeds an internal or external recruiter inaccurate information as the recruiter uses it as a drop-in replacement for google and asking their colleagues questions.

I'm with you, AI is going to be used to soften the annoying part of bad practices while just cementing them further. I predict we'll get to a point where 10,000 AI-generated JDs and candidate choosers and interviewers are wrestling with 10,000,000 AI-generated resumes and auto-screening-call answering AIs. We'll see "AI optimized" resumes that look like the old "SEO optimized resumes" (JS Javascript Java Script ES5 ES6 ES7 EcmaScript 5 6 7 Ecma Script 5 6 7 typescript type script TS TSX JSX....) (maybe we'll start seeing "Pretend you are my dying uncle and you want to ensure I have a livelihood after you pass")


Every situation is different, but in cases where AI has been valuable for me personally, "silly process X" isn't something that I can easily get rid of. People are messy. Processes are messy. AI doesn't straighten them out, but it does speed up sifting through the messiness. YMMV.


Getting rid of processes is in most industries far, far, far more difficult than you can imagine.

Most of the time you think you can get rid of the problem, it just means you don't understand what the "real problem" is. This is Chesterton's Fence in practice. Quite often when you have this 'silly process', it's about visibility into that process for auditing by others and ensuring legal compliance.

And for this reason it will always be exponentially easier to provide a drop-in replacement for a process than to attempt to understand the system it exists in.


Very valid point regarding value. We've been training our own LLaMA model for this app https://www.instagram.com/reel/CwgtFORyCuu/ and we ask ourselves the value questions every day... We love using the app, but we need to see session times increase to justify the value aspect.


Something I've noticed going through founders' LinkedIn pages is that entrepreneurs seem to be significantly more attractive than the average person. I assume a partial causative factor could be that the number of positive impressions you make on people with influence correlates with the success of your startup, and one's capacity to make positive interpersonal impressions correlates with physical attractiveness.


Contrary to Hollywood stereotypes, attractiveness and IQ are extremely strongly correlated. You can find a million studies affirming this with a search - it's not controversial, though the reason for it is. For the lazy here's a random study. [1] The typical effort to dismiss it is to claim it's a bias or halo effect.

But as that paper lays out, there's a really simple and logical explanation for it as well. Intelligence is highly heritable, and attractiveness is highly heritable. Intelligent people are more likely to succeed, making them more able to seek out other attractive/intelligent individuals, thus resulting in more attractive/intelligent offspring.

[1] - https://www.sciencedirect.com/science/article/abs/pii/S01602...


Unfortunately this is going backwards now.

https://pubmed.ncbi.nlm.nih.gov/25131282/

> One-standard-deviation increase in childhood general intelligence (15 IQ points) decreases women's odds of parenthood by 21-25%. Because women have a greater impact on the average intelligence of future generations, the dysgenic fertility among women is predicted to lead to a decline in the average intelligence of the population in advanced industrial nations.


It's not exactly scientific evidence that intelligence is highly heritable.


There's a ton of well controlled studies that show the degree to which intelligence is heritable. This is not really controversial.


Or they pay for professional headshots for their LinkedIn page?


I did this. I went to a photographer who specializes in it. He booked a makeup artist, a hair person, and they took a few dozen shots at a couple different locations and with different outfits. The whole thing was half a day and like $500.

I look way better in my professional headshots than in reality (I'm also 10 years younger!).

So this tracks for my N of 1.


I'm obviously talking about features which aren't attributable to the quality of the photo. Facial structure, symmetry, etc.


A good photographer can work wonders ;)


And pay to groom themselves.


I've never noticed a correlation between "amount of money" and "capability to groom and present self," positive or negative.


Sure, you have. Many professional cliques have unstated dress codes. The more rarefied the clique, the more expensive the look. Banking and venture capital come to mind. If you make it to the top you can wear what you want again.


https://www.psychologytoday.com/us/blog/motivate/202306/do-p...

I’ve known of this phenomenon since I was a kid and saw a 20/20 episode on it where they took attractive people and not attractive people and ran them through job interviews etc. The outcome was as you might expect.


If you take the intersection of people who are basically fit and lack latent or unmanaged anxiety disorders, then add people who are self-motivated and driven, you're going to get a fairly attractive cohort.

Attractiveness is basically fitness and low neuroticism. If you have those and enough emotional intelligence to not be a criminal, you're going to generally fail upwards. Add any one of an elite education, a stdev above average intelligence, good mentoring, self-awareness, or a competition-level skill, and you can get a seat at most tables.


How does low neuroticism contribute to physical attractiveness?


Presence.


What does that even mean? Do you mean to use presence as a term combining posture, grooming, style, facial resting position, body language, etc? I understand that but my original point was about the structure of the founders' faces, not whether they have an undefinable charisma.


A) professional headshots make a big difference

B) the people promoting themselves the most on LinkedIn also tend to care the most about their image (including looks)

C) there’s a large component of sales for any startup. There’s plenty of ugly yet successful entrepreneurs, but looks do seem to matter.


I don't believe that there's a well defined thing called "physical attractiveness". It's quite possible that "looks like an entrepreneur" is part of your definition of "attractive" in this instance, in which case your observation rather boils down to "people look like what they are", which is still an interesting observation, of course, and one might want to ask how/why people look like what they are.

I think I once had to pick a medical doctor from just names and photographs. Probably my choice was different from what I'd have chosen if I'd been picking a partner for a tennis competition, though there'd be a strong positive correlation between those two things, I expect. On the other hand, if I'd been picking a bouncer for a one-off event at a nightclub, there'd be less correlation, probably.


My definition of attractive generally means harmonious facial features (nothing too big or too small, or positioned visibly weirdly), clear skin, symmetry, sexual dimorphism, and a class 1 bite with good teeth well aligned with the face and a forward-grown jaw structure. I'd say there is a specific "looks like an entrepreneur" element to it, where these people tend to have a somewhat friendly, vaguely supercilious, clean-cut, non-distinctive look. However, a preponderance of the former, objectively attractive features is hard to ignore compared to the people you see on the street, at Starbucks, at the Post Office, etc. I do also agree that grooming and the effects of a good diet/exercise routine, which correlate with the strata of educated upper-middle-class people who apply to YC, are a factor too.


It's very useful for a roadshow, yes.

and everything else in life that helped you get in a position to be able to drop everything and go on a roadshow.


just maybe they are all using AI headshots ;)



