Ask HN: What uses have you found for ChatGPT?
5 points by ChildOfChaos on Jan 19, 2023 | hide | past | favorite | 11 comments
I'm still working on a document for my goals and areas to work on for the following year and beyond.

I've found ChatGPT amazingly helpful in rewording goals and in general anything I am writing in the document for guidance.

Yes, it takes some rewriting on my part as well, but overall the net result is much better than just me writing alone.

What has everyone else used it for?




I used it to chat with myself (feeding it my Reddit profile), since communication, ideally professional but at least chat-level, is part of my work interest. It seemed to try, but the answers were not very good: more like a sketch of the broad things someone like me might say if they were stumbling. Confidently stumbling, but still stumbling.

I used it for journaling practice. It didn't reach a quality result as fast as my own tools do, but I can see why others would find it helpful, particularly if they're new to journaling and somewhat directionless in that context.

I used it to compare against my written coaching services, as I used to offer email-based coaching. This result wasn't very good. At a minimum, ChatGPT isn't familiar with ongoing qualitative evaluation of the communicative relationship, or with aspects like identifying & working around blind spots, but more experiments would be needed to really explain the effect.

I can see why it's useful for you, given what you wrote.

Overall I'd say it's very similar to my experiences in making art with fractals [1]. If you want to poke the various software packages, interact a bit, and kind of sit back and observe the results...probably so-so outcomes. If you are willing/able to develop & provide subjectively qualified input on the results though, the outcomes will probably be much better & much more useful.

But you should ideally have an outcome standard in mind, even if you are open to whatever the outcome might be.

I do have some privacy concerns and would like to know more about that side of it, plus about getting a good, above-board dataset with regard to input sourcing, and using such a dataset offline.

Fun world we live in though, hope to see more upgrades & uses too.

1. Shameless! https://www.instagram.com/marcolas/


I've been experimenting with it to write biographies of artists I know. The artists don't have particularly prominent online presences, so ChatGPT doesn't know anything about them on its own, and it won't reach out to external webpages, which is understandable.

Instead, I've basically been describing the artist to ChatGPT and telling it to restate or rephrase what I said as if it were a professional bio. The results were not significantly better than what I would casually write on my own, and I noticed a lot of repetition. I can't blame it for having limited information to work with, but it's interesting to watch it work, at least.


Learning mostly!

There was another post about a language model using RNNs, so I had ChatGPT take a look at it,

just by showing it the three source files:

> The first code file defines the architecture of the model, including the custom operations and classes used to handle different float precision modes and device placement. The second code file sets the hyperparameter settings and loads the pre-trained model's weights. The third code file defines several utility functions and classes to tokenize input text, refine and process text input, and sample the next token in a sequence given the current context.

> The overall goal of the project is to generate text by training the model on a large dataset of text, the model can be fine-tuned on a specific task, such as language translation, text summarization, or text completion. The model can also be used for generating text in a conversational context, such as chatbots or virtual assistants, where the model generates text based on the current context, and the user's previous inputs.

and

> I can't determine if the code achieves the claims of the project, as the model has not been trained and run on any specific dataset and it is not clear what kind of performance results the developers were able to achieve with it. However, the approach used in the code appears to be logically sound. The code implements a standard transformer-based language model architecture, but with a custom implementation of the operations in order to handle different float precision modes and device placement, which is a common technique to optimize the performance of deep learning models. The code also includes some utility functions and classes that handle tokenization, text processing, and sampling of the model's output, which are also common in large-scale language model implementations.

> It would be important to run the model with a large dataset of text, fine-tune it on a specific task and compare its performance with other state of the art models, to measure its quality and scaling.

If you continue down this road, it can teach you the basics of machine learning really fast, and it can even get you started with a VS Code dev container for it.
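That last quoted bit, "sample the next token in a sequence given the current context", is easy to illustrate in isolation. Below is a minimal sketch of temperature-scaled softmax sampling over a toy logit vector; it assumes nothing about the actual code from that post, just the generic technique ChatGPT is describing.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample a token index from a logit vector via temperature-scaled softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw an index proportionally to its probability
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy vocabulary of 4 tokens; a higher logit means more likely
logits = [2.0, 0.5, 0.1, -1.0]
token = sample_next_token(logits, temperature=0.8)
```

Lower temperatures concentrate the distribution on the highest logit; higher ones flatten it, which is the knob most sampling utilities expose.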


I’ve mostly used ChatGPT for finding functions in more obscure or densely documented APIs. It shaves some time off when I go “I know Maya must have a Python function that can do this” and I can ask ChatGPT without having to dig through the documentation. I’d say it’s been about a 60% success rate: 20% of the time it gets it completely right, 40% of the time it gives me a good starting point, and the other 40% I’m stuck digging through documentation anyway.


We’ve been using it to make quests :-)

https://questgiver.org/t/05c0a7d7-5f22-41d2-90aa-f02082fbb8d...

These were entirely generated based on an intro prompt + “Make three quests people could do to help ensure ethical AI development”. It generated a json with objective types, emojis, everything.
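To make the "generated a json" part concrete, here's a hypothetical example of the kind of structure such output could take. The field names and quest content below are entirely invented for illustration; this is not Questgiver's actual schema or ChatGPT's actual output.

```python
import json

# Invented example of the kind of quest JSON a model can emit;
# field names are hypothetical, not Questgiver's real schema.
quest_json = """
{
  "quests": [
    {
      "title": "Audit the Training Data",
      "emoji": "🔍",
      "objective_type": "research",
      "description": "Review a public dataset for biased or low-quality samples."
    },
    {
      "title": "Write an AI Ethics Checklist",
      "emoji": "📝",
      "objective_type": "create",
      "description": "Draft five questions a team should answer before shipping a model."
    }
  ]
}
"""

quests = json.loads(quest_json)["quests"]
titles = [q["title"] for q in quests]
```

The nice part is that `json.loads` doubles as a validator: if the model emits malformed JSON, parsing fails immediately instead of silently corrupting the onboarding flow.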

We’re planning to use it to get past the “blank page” problem when onboarding organizations.


Generally tedious things. Reading through bug logs and finding problems, writing regex, helping me remember that thing at the tip of my tongue.
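As an example of the regex use case: I might ask it for a pattern that pulls ERROR lines with timestamps out of a log, then sanity-check the result myself. The log format and pattern below are made up for illustration, but they're representative of what it hands back.

```python
import re

# Made-up log format: "2023-01-19 14:02:11 ERROR something broke"
log = """2023-01-19 14:02:10 INFO starting up
2023-01-19 14:02:11 ERROR disk full
2023-01-19 14:02:12 WARN retrying
2023-01-19 14:02:13 ERROR timeout talking to db"""

# The kind of pattern ChatGPT might suggest: timestamp, literal ERROR, message
pattern = re.compile(
    r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) ERROR (.+)$",
    re.MULTILINE,
)
errors = pattern.findall(log)  # list of (timestamp, message) tuples
```

The sanity check matters: it sometimes gets anchors or escaping subtly wrong, so I always run the pattern against a few real lines before trusting it.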

It's also helped in rewriting wordy text into something easier to understand, but there are better tools for this (e.g. QuillBot).

For actual coding, the core OpenAI codex works a lot better, or running through Github Copilot.


Story seed ideas for improv role playing games, or quick, off-the-cuff stories for kids.


I actually find ChatGPT quite poor at this because it doesn't understand story structures. It will do a "slice of life" thing, but it's not very interesting.

I actually fine-tuned the davinci model on some Aesop summaries, and managed to get something uhhh... different.

sample:

    The Princess and the Cat

    A Princess was given a tiny Cat by her godmother as a gift. The Cat was very special and could talk. One day, the Cat warned the Princess of an impending danger. But the Princess laughed at it and said: "You are only a silly cat, how can you know about such things?" Sure enough, the danger came true and the Princess realized that she should have listened to her pet.

    Moral: Don't underestimate anyone; you never know what they might be capable of achieving.

It's not perfect, but these models aren't meant to replace human writers yet.


Puppet scripts for kids.

Clarifying, explaining, and converting bash scripts to other languages.
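For the bash-conversion use case, a small sketch: a shell one-liner that counts word frequencies, and the kind of Python equivalent ChatGPT typically produces. The bash line here is my own toy example, not from any real script.

```python
# Bash original: tr ' ' '\n' < file.txt | sort | uniq -c | sort -rn
# A typical Python translation of that pipeline:
from collections import Counter

def word_counts(text):
    """Count words, most frequent first, like `sort | uniq -c | sort -rn`."""
    return Counter(text.split()).most_common()

counts = word_counts("the cat sat on the mat the end")
```

Translations like this are easy to verify because the semantics are simple; for gnarlier scripts (traps, subshells, `set -e` behavior) the output still needs a careful review.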


Generating uses for ChatGPT /hj


Generating copies





