Hacker News | colinhb's comments

Skipped the metaverse, slotted between the two


Can it self-drive a Tesla?


Neat.

When I worked in a linguistics lab as an undergraduate long ago, we looked at spectrograms to identify sounds (specifically, places of articulation) as much as we listened to recordings.

So it makes some sense to build a model on them rather than on some other representation of the sound.


I had friends in MIT's computational linguistics group back in the 1980s who held a casual get-together where they'd take turns handing out spectrograms of human speech for the rest of the group to try to interpret. Apparently this started when some noted researcher asserted that you couldn't interpret a spectrogram faster than it was originally spoken, and one of them decided to learn them well enough to disprove it by example. That turned out to be easy, but it inspired them to keep making more challenging ones, culminating in getting John Moschitta Jr. to record for them :-)


> is a polynomial inequality in 26 variables, and the set of prime numbers is identical to the set of positive values taken on by the left-hand side as the variables a, b, …, z range over the nonnegative integers.

I hadn’t heard of this result, and my exposure to Diophantine equations is limited to precisely one seminar from undergrad, but this feels like taking von Neumann’s famous quip to its most fantastical extreme:

> With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.


It's difficult to draw meaningful conclusions from country-level differences like this, especially when the countries are as dissimilar as the U.S. (pop. 340M) and Singapore (pop. ~6M).

Looking specifically at PISA, it's not usually administered at the state level in the US, but when it has been, individual states have outperformed the national scores, as one should expect.

For example, Massachusetts has scored similarly to Singapore in Reading and Science (zero or small statistical difference) and not far behind in Math.[1] It would be a reasonable hypothesis that a PISA score for the Greater Boston schools (pop. 5M) would outperform the U.S. average by an even wider margin.

Sweeping country comparisons tend to amplify noise rather than reveal a clear signal, and are often about regression to the mean as much as anything else. It's not impossible to make sound inferences, but it's difficult to avoid motivated reasoning.

1. https://www.doe.mass.edu/news/news.aspx?id=24050


Looks great! One thing I'd suggest (it still isn't clear to me, though I'm interested enough to investigate later): make the note-taking workflow clear…

* Is this a bunch of titled markdown docs organized (conceptually) into folders/hierarchically?

* Is this a bunch of untitled/title optional “cards” organized by tag?

* Is this one long markdown document that you append to?

These map onto different note-taking systems, some mainstream and some less so, and I'd appreciate understanding which workflow you've designed and optimized around.

Saying you started as a Google Keep user is helpful, but I've only used other systems (homebrew text files, Simplenote, Obsidian, etc.), and have only rough concepts of what Evernote and OneNote are like, so a couple more signposts on usage would be helpful.


Thanks for the feedback. You're right, I didn't really consider that you may come from a very different system with different concepts. I'll explain the workflow better in the demo notes.

To answer your questions quickly: I usually keep very small notes, just a few lines or todo checkboxes. One note per idea. But sometimes an idea grows over many days and that note gets much larger. There's no limit to how large a note can get.

You can always set the title of a note using #, which is standard markdown, or leave a blank line after the first line and it automatically becomes the title.
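As I read this, either of the following (a hypothetical example; the exact rendering may differ) would give a note the title "Groceries":

```markdown
# Groceries
- milk
- eggs
```

```markdown
Groceries

- milk
- eggs
```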

There's no concept of a tag per se, but you can write #someTag and then literally search for #someTag. The search feature is just substring search over all the notes (no stemming or anything fancy).
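A minimal sketch of that search behavior (not the app's actual code; the function name and note representation are assumptions): plain case-insensitive substring matching over every note, no index or stemming.

```python
def search_notes(notes, query):
    """Return the notes whose text contains `query` as a substring (case-insensitive)."""
    q = query.lower()
    return [note for note in notes if q in note.lower()]

notes = [
    "# Groceries\n- milk\n- eggs",
    "Ideas for the garden #spring",
    "Call the dentist #todo",
]

print(search_notes(notes, "#todo"))  # → ['Call the dentist #todo']
```

Because the match is literal, searching for "#someTag" finds exactly the notes containing that text, which is what makes the tag convention work without any dedicated tag feature.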


Thanks for the clarification. I still haven't really played with it, but I take it these are the mechanics (not the workflow)…

* You have a big chronologically ordered list of notes

* By default, all notes are in view

* You can make a title (instructions in demo), but the title's significance is internal to the note (it's not used for ordering/management)

* Big list of notes is union of two disjoint sets: pinned and unpinned

* You can view all notes, pinned notes, or unpinned notes

But I'm not sure I have that right. FWIW, I appreciate some of the design decisions I'm seeing; I just haven't had time to poke around enough to understand them.


You're right. The order is chronological, with pinned notes at the top. The user can toggle the display of pinned notes. There are also archived notes, which you can find in the main menu (top-left corner).
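The ordering described above can be sketched with a single sort key (a hypothetical illustration; the field names and the newest-first assumption are mine, not the app's):

```python
# Notes sorted chronologically (newest first, by assumption),
# with pinned notes floated to the top.
notes = [
    {"text": "idea", "created": 3, "pinned": False},
    {"text": "todo", "created": 1, "pinned": True},
    {"text": "log",  "created": 2, "pinned": False},
]

# False sorts before True, so `not pinned` puts pinned notes first;
# negating the timestamp puts newer notes first within each group.
ordered = sorted(notes, key=lambda n: (not n["pinned"], -n["created"]))

print([n["text"] for n in ordered])  # → ['todo', 'idea', 'log']
```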

In the beginning, I intended to add dragging of notes to change the order and organizing notes into folders. But honestly, after using it for a while, I realized I don't really miss those features at all, even though I have thousands of notes. As long as search is fast, I can always find what I need quickly.

Again, thank you. I'll add this info to the demo note.


Ah got it - pinned notes float to the top

FWIW agree with your call on not building too many organisational tools in

Thanks again for sharing; look forward to trying it out


Archive link: https://archive.is/zwJbj

This piece is responding to an Economist op-ed by Bret Taylor and Larry Summers representing the OpenAI board, and comes to many of the same conclusions I did.

- Economist: https://www.economist.com/by-invitation/2024/05/30/openai-bo...

- Archive link for Economist: https://archive.is/rwRju

IMO key paragraphs...

> The review’s findings rejected the idea that any kind of AI safety concern necessitated Mr Altman’s replacement. In fact, WilmerHale found that “the prior board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

Comment: I'm surprised that they don't refute any of the concerns about the CEO, and if the investigation was so redemptive, they should release the findings. (It must be that it wasn't, so they won't.)

> Ms Toner has continued to make claims in the press. Although perhaps difficult to remember now, OpenAI released ChatGPT in November 2022 as a research project to learn more about how useful its models are in conversational settings. It was built on GPT-3.5, an existing AI model which had already been available for more than eight months at the time.

Comment: As someone who had access to OpenAI models prior to the release of ChatGPT, it's disingenuous to say that GPT-3.5 was "available". Yes, available to enrolled researchers willing to suffer through the tools used to interact with a model not fine-tuned for conversation.


Archive link (for paywall): https://archive.is/rwRju

Key paragraphs...

> The review’s findings rejected the idea that any kind of AI safety concern necessitated Mr Altman’s replacement. In fact, WilmerHale found that “the prior board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

Comment: I'm surprised that they don't refute any of the concerns about the CEO, and if the investigation was so redemptive, they should release the findings. (It must be that it wasn't, so they won't.)

> Ms Toner has continued to make claims in the press. Although perhaps difficult to remember now, OpenAI released ChatGPT in November 2022 as a research project to learn more about how useful its models are in conversational settings. It was built on GPT-3.5, an existing AI model which had already been available for more than eight months at the time.

Comment: As someone who had access to OpenAI models prior to the release of ChatGPT, it's disingenuous to say that GPT-3.5 was "available". Yes, available to enrolled researchers willing to suffer through the tools used to interact with it. And beyond that, ChatGPT as a product, used by humans, is a world apart from a trained model: https://openai.com/index/chatgpt/

They shouldn't have written this piece. It makes them look worse.


Agree in part, but I also think Terry is such an outlier (though also generous and humble) that it's hard to extrapolate from his example to the general case


Different issue -- the medium that the cells grow and multiply in has FBS as a constituent. Most cultured-meat companies rely on it, though upstream a number of companies are working to develop "serum-free" media with the goal of selling into the cultured-meat companies. It's also an active area of academic research: since FBS is not a chemically defined product, it introduces variability into cell-culture protocols.

Example: https://www.nature.com/articles/s42003-022-03423-8

