viscanti's comments — Hacker News

A16Z


Also Peter Thiel I believe.


Look at Palantir stock. It has gone brrrrrr since the announcement.

VCs associated with the All-In podcast are heavy Trump supporters. Crypto companies contributed hundreds of millions of dollars to Trump super PACs; Coinbase comes to mind.


Also David Sacks I believe.


I read an interesting article recently about Thiel's, Musk's and Sacks' common career paths, views and ambitions.

One point that stuck out for me was how all three of them spent a significant part of their childhood in Apartheid-era South Africa as part of the white ruling class (so they learned of Apartheid as a good thing, or even had family that belonged to the ruling class). This is well known for Musk, but I think less so for Thiel and Sacks.

Seems to me that gives their ultra-libertarian, "anti-woke", pro-inequality views another context.

The article is in German, at https://www.derstandard.at/story/3000000243890/tech-investor...


Yes Peter Thiel, but he’s the only other one I’m aware of.

That’s not exactly a groundswell of support from Silicon Valley.


Got a source for that?


This kind of proves the point? Presumably your mother didn't buy the latest phone for "continuity" or camera improvements. The features and additional hardware improvements might be noticeable after being used, but are they driving sales to people who aren't tech enthusiasts?


Because that step is so trivial, it's likely pretty easy to take lots of code and minify it. Then you have the training data you need to learn to generate full code from minified code. If your goal is to generate additional useful training data for your LLM, it could make sense to do exactly that.


I suspect, but definitely do not know, that all the coding aspects of LLMs work something like this. It's a fundamentally different problem from prose, where a paragraph should never be the same as any other paragraph. Coding seems a bit more like the game of Go, where an absolute score can be used to guide learning. Seed the system with lots and lots of real-world leetcode examples, then train it to write tests, and now you have a closed loop that can train itself.


If you're able to generate minified code from all the code you can find on the internet, you end up with a very large training set. Of course, in some scenarios you won't know what the original variable names were, but you would expect to get something very usable out of it. Wherever you can deterministically generate new and useful training data like this, you would expect it to be used.
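The pair-generation idea these comments describe can be sketched in a few lines of Python. This is a crude illustration, not anything the commenters specified: `Minifier`, `make_training_pair`, and the `v0, v1, ...` naming scheme are all my own invented names, and a real pipeline would handle imports, globals, attributes, and scoping far more carefully.

```python
import ast
import builtins

# Builtin names (len, print, range, ...) must keep their identity,
# otherwise the minified code would no longer mean the same thing.
BUILTINS = set(dir(builtins))

class Minifier(ast.NodeTransformer):
    """Replace identifiers with opaque short names (v0, v1, ...).

    Crude on purpose: it renames every Name and argument node uniformly
    and does not track scopes, so module-level names like imported
    modules would get mangled too. Function/class names themselves are
    plain string fields, not Name nodes, so they are left untouched.
    """

    def __init__(self):
        self.mapping = {}

    def _short(self, name):
        if name in BUILTINS:
            return name
        if name not in self.mapping:
            self.mapping[name] = f"v{len(self.mapping)}"
        return self.mapping[name]

    def visit_Name(self, node):
        node.id = self._short(node.id)
        return node

    def visit_arg(self, node):
        node.arg = self._short(node.arg)
        return node

def make_training_pair(source: str) -> dict:
    """Return one (minified -> original) example for 'un-minification' training.

    ast.unparse (Python 3.9+) also drops comments and normalizes
    whitespace, which mimics another part of what minifiers do.
    """
    minified = ast.unparse(Minifier().visit(ast.parse(source)))
    return {"input": minified, "target": source}
```

Run over a large corpus, each original file becomes the label and its mechanically minified form becomes the model input, which is exactly the "deterministically generated training data" being described.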


> I can't understand the hate

I think it's because of what the team promised (a new "Large Action Model") versus what's actually being delivered (the model is some scripts). The team has a history of over-promising and under-delivering (or scamming, depending on your perspective). It's also economically unviable: somehow you're meant to get free LLM calls for life, but there's no way for them to actually cover those costs. There's no real communication about how it might be a limited-time thing for early adopters, or how it could ever become sustainable.

If they had focused on what they actually have, they probably could have charged the same amount and people would generally be OK with it. But they've over-promised and under-delivered again. I think the reaction is pretty understandable.


> The team has a history of over promising and underdelivering (or scamming - depending on your perspective).

Their attitude of not communicating anything, and of basically inventing things the R1 can do without the engineering to back them up, is what reads as "scamming" to me. Over-promising and under-delivering is one thing, but lying about what something can do and then going back to the engineering team to "just make it happen" is what I'm reading between the lines here.


It seems to be difficult to turn the pure research back into new products. Apple famously got lots of ideas for free from Xerox PARC. Google researchers wrote the Attention Is All You Need paper and they're now desperately playing catchup because they couldn't convert it to any kind of product. There's nothing wrong with companies investing in pure research, but these large companies sometimes are unable to take advantage of the research. The people running the business want to keep doing what got them successful, not some new experimental thing that might not work.


> Google researchers wrote the Attention Is All You Need paper and they're now desperately playing catchup because they couldn't convert it to any kind of product.

This isn't true. The transformer underpinned Google Translate for a long time. They just didn't monetize Google Translate heavily enough. It's still one of the best translation services out there, and its ability to translate real-time conversations has been around for years now.


Yeah, being first doesn't mean you win automatically. There's a story about the Ramones playing a show at a famous club in NYC, and everyone in the crowd went home and started bands that became way more successful and famous than the band they were trying to imitate. I think Blondie was one of the bands that came out of that crowd.


You might be thinking of the Sex Pistols gig whose audience of 30-40 included Morrissey, Mark E Smith, the Buzzcocks and Lower Broughton: https://www.bbc.co.uk/manchester/content/articles/2006/05/11...

The docudrama 24 hour party people is good to watch about this era.


Sorry, mental blip, replace "Lower Broughton" with "Joy Division" (!)


But everyone and their mom would rather be the Ramones than Blondie, LOL


No one has ever brought a native (not 3rd party) calculator to the iPad before. Apple is the first.


> No one has ever brought a native (not 3rd party) calculator to the iPad before. Apple is the first.

I'm not sure I understand... Considering Apple makes the iPad, wouldn't all iPad calculators other than Apple's be 3rd party by definition?


On device or in an Apple-owned DC. It sounds like they have aspirations for their own Apple-owned LLM. ChatGPT seems like it's there until they can get something good enough to generally replace it, for the cases where their in-house solution isn't capable enough yet. They'll likely continue to invest heavily in big, capable LLMs as well as ones small enough to run on device (while working on the hardware side to ensure the device can run more powerful models).


The benefit of owning the last mile to the customer is that you can choose when you want to replace default Maps, or not.


So, the company that brought us Siri is going to build something better than ChatGPT... something that will run on-device no less. It's just not quite ready yet. Got it.


Siri was quite impressive when it came out. It just never got significantly better, until it became an embarrassment.


It never was impressive. It only made for cool demos, and Apple aficionados have worked the reality distortion field like crazy ever since. It's actually embarrassing how bad it has always been compared to the Google equivalents announced later with less fanfare.

I don't even care much because I don't think "assistants" are good for much of anything, but if I have to use one, Siri is not the one I would like.


Yeah, by having literally a shitton of cash, and by having bought multiple ML startups over the years. Plus it's not like they couldn't make Siri better: multiple internal projects had problems with Siri and were trying to get it replaced, but none went anywhere, possibly because higher-ups were planning ahead for this.


They say it's going to be free forever with no subscription, but they have to pay for the ChatGPT API calls. Even if you forgive them for overhyping their ChatGPT wrapper, they're still a Ponzi scheme.


If you know a great deal about what is right and wrong, and you choose to do something bad, that feels worse than being bad and not knowing any better.


>If you know a great deal about what is right and wrong, and you choose to do something bad,

The issue is that when you start studying ethics, you learn pretty quickly that what's right or wrong isn't exactly obvious. You don't go to a philosophical ethics class and get taught what's good and bad as in church; you get taught how to think about the many ethical systems that exist.

That means you're going to encounter the ethics of Max Stirner, who was so radically egoist that he makes Ayn Rand look like a puppy-loving communitarian, and you're going to encounter Jesus Christ. That people who study ethics for a living often hold views that seem unethical to most people isn't really surprising, simply because they're exposed to such a broad range of views.

The people who are perceived as ethical are usually those least exposed to ethics as a field of study, because they're exactly the people most likely to adopt the beliefs of those around them.


For some people, the comments are the worst part of YouTube. I could see them being pretty vocal about not liking the design that makes them more visible.


I have the feeling these comments went from overwhelmingly hateful to overwhelmingly full of praise. I don't know how they managed that transition. Moderation or a culture change? They're useless either way.


YouTube comments used to be the worst comments on the net, but for a while now they've been decent: no better or worse than Reddit comments, and people go to Reddit just to read the comments. I often find myself reading the comments alongside YouTube videos.

I agree with this redesign since it turns the right side of the video into a general social space: chat when it's a live video, comments when it's a normal video.

Related videos, on the other hand, are static and can be easily moved beneath the video.


Truly. I have an extension that blocks them ("Hide YouTube Comments, Live Chat, & Related"). Every so often I will unblock them for a certain video, but life is not better with them for the most part.

