I’m a company of one, and I use it every day, multiple times a day as a coding assistant. I’m using the excellent Code Genie (sp?) plug-in to VSC which puts a chat window directly in VSC and does cool things like allow you to highlight blocks of code and ask contextual questions / refactors / bug checks, etc.
I would say my efficiency is up ~20% since starting to use it, and my Google searches & StackOverflow visits are probably down 80-90%. At least with respect to this corner of the internet, they are both in mortal danger.
Quick question: How does Code Genie get responses from the GPT API so quickly? I'm developing an app right now and it takes 40 seconds to query gpt-3.5-turbo.
Just curious how the data flow works. Are you sending information to both places, CGenie and OpenAI? Full disclosure, I have never used CGenie. I know you can turn off data collection at OpenAI, can you do the same with them?
The VSC extension is just a conduit to the OpenAI API. You use your own API key. I haven’t dug into the exact data flow, however, so I can’t speak to whether the extension itself collects any data, or what it collects.
I'm wearing a lot of hats at the moment since we're still in startup mode so the list is pretty varied:
- I ask it to create a list of questions that different personas might want to answer with our reports section. I then ask it to categorize those questions and to provide suggestions for graphs & dynamic filters that would help answer them.
- I ask it for help translating emails and documents for international customers.
- I ask it to create formatted Markdown spec documents with really verbose context. They serve as a sort of foundation for feature sprint kickoffs.
- I ask it to take internal documentation and to simplify it so that we're able to use it for public facing help center documentation.
- I use it as my first resource for asking questions about SQL queries, React patterns, explaining different things eg SVG properties and how to manipulate them. I gut check things with Google when I feel like it might be hallucinating but generally it does really well ~ 90% of the time. Saves lots of time compared to going to Google first.
- I ask it for help writing tests or help refactoring code
- I asked it for help in creating some policy & procedures docs that we needed for compliance. Essentially gives you a decent template to then build from & customize.
- Lots of other things. It replaced Google for so many things in my daily workflow. It also helps a lot when you're not feeling creative and you need some ideas.
> I gut check things with Google when I feel like it might be hallucinating but generally it does really well ~ 90% of the time. Saves lots of time compared to going to Google first.
I asked ChatGPT to give me boilerplate for a Fabric.js Object today, and it added a random object property that was nowhere to be found in the official docs. At first I thought it was amazing to get such real-looking code, but after testing I wasn't sure what to make of it.
I don't use it, it returns mostly incorrect code that makes me realize it's not worth it. I'm more productive and write better code ignoring ChatGPT 95% of the time.
my comment will probably get flagged or downvoted or whatever, but yeah agreed.
I literally cannot understand how people who need GPT as an assistant for writing code actually code. If I can reason about something, I can write it faster than the prompting feedback loop takes.
It's for new things: new languages, frameworks, libraries. When you're not fluent, it can be a helpful hand for a beginner, or for someone who has to do a lot of things they're not expert in, like a one-man band in a sole enterprise or one corner of a startup.
It can increase efficiency for generalists. For deep work, it's less useful.
I agree that, so far, the super-specialist can do better. But even specialists will find it useful when they need to cross the domain they're good at with something they're still a beginner in. Also, for mechanical things it is amazing: helping resolve a patch conflict given its context, or editing lots of parts of a codebase in a way that would otherwise require multiple regexes.
I always wanted to do something very specific in Django. I don't work with Python, so even though I knew what I wanted to do, my skills weren't there to create working code. So I thought I'd throw the problem at ChatGPT. The generated code had a couple of flaws, but I managed to fix them in 15 minutes or so and got a working result. The code also gave me some interesting perspective on how some things could be implemented.
What it generated in Django was very simple and likely far from the quality an experienced engineer would come up with, but at my knowledge level that was enough.
No offense, but I wouldn't hire you based on your resistance to learning new technology.
You haven't spent enough time with GPT-4 and Copilot to understand how LLMs can save you time. There's a reason the world's top engineers, like Andrej Karpathy[1] and Guido van Rossum[2], are using these tools: they save a ton of time and work when used correctly.
Personally, I'm using these tools for coding, research, and writing. Anyone who doesn't understand how much they can accelerate these tasks when used effectively is going to get left in the dust. I've spoken to colleagues who don't seem to get it either; it's strange to me.
I've had the same experience and agree. I've talked to engineers who initially have the same standoff-ish opinion, then after a few weeks of using the tools end up changing their minds and adopting them.
I find it is really useful for giving me a base to start from, especially when it’s writing code for tools that I am not very comfortable with. It usually won’t spit out the ultimate solution on the first try, but it can give me an idea of how to modify what it gave to do exactly what I want, in way less time than actually writing it.
The use case where it's really good is boilerplate code that you may not remember. For example, I haven't written React code for years now. I know what good React code looks like when I see it because I've worked with it before, but I wouldn't be able to implement something from scratch without googling or copying stuff from Github. I just don't remember the libraries and best practices off the top of my head.
When I ask ChatGPT to do it for me it gives me an excellent starting point. Sure, there will be bugs, but because I know what I want I can spot and fix them immediately. It's much faster to adjust ChatGPT's code than it is to Google around for starting points.
Another example is shell scripts. I only touch bash once every few months and I keep forgetting the syntax for certain operations. Asking ChatGPT for a starting point is much faster than googling and visiting 20 StackOverflow posts for what I want.
But I agree with you that for day-to-day work on the same codebase where you have all the context, ChatGPT usually isn't worth it.
We use it a lot at my job. A lot of minor tasks that used to take an hour are great to generate with ChatGPT and get done in a few minutes.
Prompts like these
1. Generate me a build pipeline using GitHub actions that builds a java project with docker and posts JUnit test results to a PR
2. Write me a Python program that copies all the CloudWatch dashboards from my production AWS account, replaces all instances of the word ‘prod’ with ‘qa’, and posts the results to this AWS account.
These are two things I’ve done recently that aren’t particularly enjoyable but necessary parts of any software work.
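The second prompt can be sketched roughly like this, assuming boto3 and two configured credential profiles; the profile names here are placeholders, not the commenter's actual setup:

```python
def rewrite(text: str, old: str = "prod", new: str = "qa") -> str:
    """Swap environment names in a dashboard name or JSON body."""
    return text.replace(old, new)

def copy_dashboards(src, dst):
    """Copy every dashboard from the `src` CloudWatch client to `dst`,
    rewriting 'prod' to 'qa' in both names and bodies."""
    for page in src.get_paginator("list_dashboards").paginate():
        for entry in page["DashboardEntries"]:
            name = entry["DashboardName"]
            body = src.get_dashboard(DashboardName=name)["DashboardBody"]
            dst.put_dashboard(DashboardName=rewrite(name),
                              DashboardBody=rewrite(body))

if __name__ == "__main__":
    import boto3  # kept local so the helpers above import without boto3
    src = boto3.Session(profile_name="prod").client("cloudwatch")  # placeholder profile
    dst = boto3.Session(profile_name="qa").client("cloudwatch")    # placeholder profile
    copy_dashboards(src, dst)
```

A blanket string replace can clip unrelated words containing "prod", so a real run deserves a quick diff of the rewritten bodies before posting.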
I'm not in tech like most of y'all. I help out my dad with his consulting company for mostly public agencies. The other day a group we're working with wanted my dad to write some media notes for the unveiling of a project that's been in the works for a while. I gave chatgpt a quick prompt of what the project is, who the audience is, how it's important for a certain demographic and for the community, etc. My dad edited it and sent it over to the leader of this group for his speech the next day. Even though it was edited, I laughed because a chatgpt quote that wasn't edited made it on to the government's website for a pretty big announcement.
Other than that I used it this morning while editing a report we hired a team to create. It was helpful to reword some sentences that weren't very clear before sending it off to the client.
Other than that, I mostly use it to brainstorm ideas/give me related concepts to something I'm working on.
Also, not work related, but last week I used chatgpt to create an opening message for a dating app. I knew the gist of a joke I wanted to say related to this woman's interests, but had chatgpt word it for me. There was a lot more to our conversations, but she did at least respond to the opener and we got the conversation rolling. We actually went on a date this weekend, where I had to rely on my own brain's inefficient language model! It went pretty well though.
Haha good question. I...did not. I feel a bit weird about that but I like to overshare, so it will definitely come out. She was talking about how she uses chatgpt to help write abstracts for her engineering PhD papers. Hopefully she won't take issue with it and think I'm not genuine, etc. I think it will be fine though based on our other conversations.
I did actually do something somewhat similar with another girl. She was a nursing student who used chatgpt for statistics homework (as well as to cheat on exams!). After we hung out I sent her a simple chatgpt generated message along the lines of thanks for hanging out, had a great time, should do it again, etc. Then immediately after I sent a message saying that I had asked chatgpt what to say to someone after a date. She seemed to think that was funny...although we never did hang out again (for other reasons).
Extremely good at turning JSON into a Go struct. Just in general, as someone who has spent years writing Python, it’s helped me fill in gaps with Go. This is a killer feature for GPT.. if you’re proficient in one language it can help you rapidly get up to speed in another.
Working for a large international company, I use ChatGPT mostly as a code generator for one-time tools (for a variety of languages and shells) and also to generate boilerplate text for email answers and document templates. I'm looking at this from a technical perspective, and anything that can help my creative side be a bit better is good. A 2023 version of the rubber-duck method of debugging, or a 2023 method of generating "manager speak".
We have had strict instructions not to put any code/email/text into ChatGPT, just use it as a virtual person to talk to and get ideas from.
But: the moment ChatGPT v4 can run on-prem in my private cloud, things are going to get wild. One advantage of working in a large multinational is that for everything there is a procedure or a standard. I have 25 years of design documents, source code, test documents, user documents, and a ticket system with 25 years of problems and records of how each ticket was resolved. The moment I can feed that into my local ChatGPT instance, the whole helpdesk/support system will dramatically change.
I'm optimistic on the timeline: I think that within the next 2 to 3 years all commercial ticket tracking systems will have their own ChatGPT-like back-end.
Yes. It is a good clue engine. If I can’t solve a problem it will give me something to try. Sometimes misses the mark. Sometimes finds something I hadn’t thought of.
Also great for asking how-to questions about Python and for explaining ML concepts. The danger is I won't learn Python properly but will just remember the prompts!
I try to use it as often as possible as a rubber duck. It feels sort of like pair programming with someone who has a pretty good idea of how most common libraries work, but where the details are fuzzy and a bit outdated.
The HR team uses it all the time to generate initial drafts for job descriptions and the like.
A member of the product team pointed our API documentation at it and asked it to write a simple query. 99% of the code was correct, except it used the wrong header for authentication. Lo and behold, that was something we hadn't documented very well!
We did an employee survey a few months ago on who was using AI tools and how, and I'm confident that usage has gone up significantly since then. I know mine has.
I use ChatGPT sometimes outside of work too, for hobby projects. I'm finding it much less useful for frontend work, partially because it's not a domain that I'm as familiar with and so the shortfalls in the code don't immediately stick out to me. It's good for writing specific things like "write a typescript function to connect to Google Sheets and write x data to a table"
For a personal project, I used ChatGPT to help create a little design tool in P5.js to find balanced arrangements of 5 magnets for a new rotary magnetic bow design. (drag the vertices of pentagon) https://editor.p5js.org/spDuchamp/full/zgtkE2xik
This is a project that I'd been totally stuck on for quite a while, and it was a breeze to finish with the help of ChatGPT. Using ChatGPT to make quick little design tools in P5.js is a total game changer for me.
I wonder how many such problems exist, where the inventor had essentially given up because there was no easy way to bring in expertise to answer a pressing question.
I use ChatGPT for a wide variety of tasks throughout my day, but my favorite use case is generating one-off utilities quickly. A recent example of this was when I needed to move and reorganize approximately 1 million files from one hard drive to another over a six-week period. Unfortunately, the destination hard drive began to malfunction – I could access metadata, but file transfers to and from the drive would fail almost immediately.
With ChatGPT's assistance, I was able to create a utility that scanned the 1 million+ files on the faulty destination drive and then re-copied those files from the original source drive to a new destination drive, all while maintaining the updated directory and file structure.
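A utility along those lines might look something like the sketch below. Matching faulty-drive files back to the source drive by filename is my assumption, since the comment doesn't say how the pairing was done; hash-based matching would be safer when names repeat:

```python
import shutil
from pathlib import Path

def index_by_name(source_root: Path) -> dict:
    """Map each filename on the still-healthy source drive to its path
    (assumes filenames are unique across the drive)."""
    return {p.name: p for p in source_root.rglob("*") if p.is_file()}

def recopy_tree(faulty_root: Path, source_root: Path, new_root: Path) -> int:
    """Walk the faulty drive's tree (its metadata is still readable) and,
    for each file, copy the same-named file from the source drive into the
    new destination, preserving the reorganized layout."""
    source_index = index_by_name(source_root)
    copied = 0
    for path in faulty_root.rglob("*"):
        if not path.is_file():
            continue
        src = source_index.get(path.name)
        if src is None:
            continue  # no match on the source drive; worth logging in a real run
        dst = new_root / path.relative_to(faulty_root)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
        copied += 1
    return copied
```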
Not allowed, because the privacy is non-existent, the licensing is questionable, Microsoft sees everything, and the responses are often wrong and require more work to verify. The risks are greater than the rewards.
ChatGPT is the bomb. I use it to create content, to help me find weird words, and to help me find films whose names I totally don't remember from just one scene I can describe. Just the other day, I needed to pound nails into my office walls, but my fingers were too big to hold the nail, so I asked for a solution using common business items. It listed 10 of them, and one was to use a rubber band to hold the nail, which I did, and it worked like a charm. I'd been trying to jury-rig something but couldn't think of anything; it thought up 10 solutions in 10 seconds. So I didn't have to fiddle-fuck with it for 45 minutes, only to give up and have wasted all that time.
I'll ask what kind of electronic equipment to buy. So many things.
I have found a lot of bugs with it too, though, so I'm getting educated. For example, at least for me, when I ask for a citation of something, it just gives me URLs that go to 404s. Not once have I gotten an actual real citation. So I'm careful about that.
There have also been many funny things, to me, that I have asked and got weird responses that amused me.
We are in the content business and one of our products is SaaS. We are using the AI products below -
ChatGPT - It's good for coming up with creative SEO titles and checking grammar.
Chatbase - We recently started using GPT based support and seeing good results, users are interacting with the bot more.
AnySummary - A nice tool to upload a file in any format and chat with it. Authors are using it to write articles from podcasts and videos, saving them a huge amount of time.
OpenTools AI - A website for discovering new AI tools. My team keeps checking this site to find new tools that can reduce cost and time.
MidJourney - All of our images in the articles are from MJ now.
I hadn't coded in ages. I started a hobby project to see whether I could remember enough to "set up a cute chatbot over email" with ChatGPT's API.
At some point, ChatGPT asked me what added value users would get from the project. That was an interesting question!
It was also an excellent example of how this project became more of a collaboration instead of a one-way street of asking ChatGPT to do stuff. The back-and-forth brings in added value beyond each of our contributions.
So we (the AI and I) decided to document this collaborative journey on the project. So far, we've worked on:
1. Planning the project and pieces of it: excellent for frameworks, ideas, and feedback.
2. Coding + tech setup: we wrote about six superpowers, from choosing between options (like AWS vs. GCP vs. Zapier) to learning code best practices and more.
3. Building a website with ChatGPT and other AI tools in a day.
I use copilot and chatgpt to help me write terraform code, which i’m still very new to. It’s decent at showing me which IAM role policy data attachment thingy I need to add next.
It’s not that it’s particularly amazing, more that the google hits I get when searching for my problems are littered with SO mirrors and low effort blogspam. If this was ~7 years ago I think the search results could have gotten me just as far.
For a personal project it helped me work through an area of the code that would have otherwise been a mental blocker for me because I wasn’t sure where to start.
At work I’ve been using it a bit to improve feature/design concepts. It’s helped me come up with some unique improvements (and some really boring and generic ones too.)
Asked it to write Python code to dump data from a DataFrame to GCS, and it wrote a perfect function that I could immediately plug into my class.
Also asked it to write a few commands for BigQuery admin; so far so good, but I always double-checked due to the nature of those commands.
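For the DataFrame-to-GCS case, a minimal version is often just pandas plus the gcsfs filesystem plugin, which lets `to_csv` accept `gs://` URIs directly; the bucket and path below are hypothetical:

```python
import pandas as pd

def dump_dataframe(df: pd.DataFrame, uri: str) -> None:
    """Write a DataFrame as CSV. With gcsfs installed and credentials
    configured, pandas resolves gs:// URIs through fsspec, so the same
    call works for local paths and GCS."""
    df.to_csv(uri, index=False)

# Hypothetical usage:
# dump_dataframe(my_df, "gs://my-bucket/exports/data.csv")
```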
For my study I asked it to write a C program about processes. It got it almost right, except for one spot that cost me a while to figure out. But still faster than me banging on random StackOverflow doors. It also helps a LOT by telling me which kernel source file to look in for a given functionality.
My real concern is that I'd rely on it too much whenever I couldn't figure out something. That's fine for work because work is boring anyway and I want to end it asap, but not good for study because I want to grow grit.
Debugging - I just throw hundreds of lines of logs at it and it's pretty good at figuring out what needs to be done ... Summarizing papers
How does that work, given the small context window size? I've tried dumping a lot of text into the ChatGPT window and it just ignores most of it, or complains about the input being too long.
I’ve experimented with it a bit and found it can’t really do anything you might consider specialist.
If I ask it detailed questions about the topic I did my PhD on, it can’t answer correctly. I worked on fluid dynamics for a couple of years in my first role, and there is a well known algorithm called SIMPLE for computing time evolution of steady state problems, and it couldn’t generate code for this even with lots of playing with prompts.
I do web stuff more these days and I couldn’t get it to output a fully working React hook in Typescript that POSTs with Axios; there was always some sort of type error.
I've been using it a lot to think through development builds. I'll tell it the architecture that i'm thinking, the tech stack, and go back and forth with it. It saves a lot of time and helps me think through my process. That's my main use case (with copilot to finish the deal). The token limits are a pain for full code generation.
I'm working on a site right now where I use the API to generate me full summaries of AI sites. Features, excerpts, descriptions. It works really nice. I find it works best when you split the prompts up.
Your directors are more creative than mine. Our CyberSecurity team had ChatGPT non-functional within days of it being released publicly. Apparently it's too much of a risk, or some nonsense.
I understand the frustration but I have sympathy for your security team. As a former security engineer you'd be shocked at the nonsense I had to clean up from our devs.
One good idea I have: since it's a generative model, instead of asking it mechanical or electrical engineering questions directly, ask it to make a program that calculates something, and then slowly ask it to improve on it.
We run a public crowdsourced forecasting site for the government here: https://inferpub.com. For our various stakeholders, we're using ChatGPT to summarize forecasters' rationales for their probabilistic forecasts. We ask it to summarize both the arguments for the event happening and the arguments against. We still need a human in the loop given who the information is going to, but it's been an excellent starting point.
I'd say 80% of usage is focused on engineering tasks, while the rest are product/sales/customer success/marketing/HR.
The biggest issue we've seen is that folks don't really know how to use it. We created a Slack channel to share examples, encourage people to try it, and help each other out, which seems to be increasing usage (and memes).
Full disclosure: I built a ChatGPT for Teams tool with a friend, so most of the usage is sharing links/collaborating with that tool.
I use it to help kick off the narrative form of RFP responses. It’s not a direct copy-and-paste, but it helps get started on questions like “why choose this platform over another?”.
I'm building a p2p AI sharing platform, which runs in browser using webRTC. Basically a glorified chat room where everyone has access to the same AI.
I use it for coding the site but also for creating the user sessions, which right now include a group trivia game and a D&D session, both managed by GPT-4.
I've added in image generation using DALLE (somehow Midjourney still has no API) and it makes for unique sessions each time. With good prompting, it's a fun group experience.
I've particularly enjoyed saving loads of time by having it convert API responses to typescript classes.
And once I was done, asking it to write up some functions for the way they interact worked out very well.
In general, I've made a habit of asking it coding questions (usually how to do something in a complicated framework) before bothering teammates. Probably about 80% success rate.
Coding aid for unittests. Debugging aid for languages / frameworks I'm not particularly familiar with. Work that requires reformatting. Translating from rough drafts to more polished / professional language. Learning more about domains I don't have much expertise in where I need specific conceptual questions answered.
Copywriting content creation, job postings, translation, rough drafting code. So far it's all topical usage by employees but we've already got plans to do much deeper product integration in the future.
Letting people get a free taste directly with ChatGPT was definitely a smart move from OpenAI. The execs are already on board because of it.
It's completely replaced StackOverflow in my workflow. Devs are using Copilot for in-IDE assistance, and ChatGPT for summarizing/bootstrapping/review of a research topic, like "show me an example of a custom Predictor class to use with vertex AI," etc.
i use it for some small projects or features where i know what i want, but not sure the best way to implement it. for instance i had a project where for scanning purposes, i needed to copy roughly 50 container images from google container registry to an in house azure container registry. the process was taking quite a while as we transferred roughly 70gb worth of images.
i asked chat gpt to take my script and modify it to utilize multi threading. i had used threading in a project years ago, but couldn’t remember exactly how to implement it. it gave me some snippets that got me 95% of the way there, just a couple small tweaks and it was good to go
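The threading pattern described above is, in rough outline, a thread pool fanned out over the image list; the `crane` CLI and registry names below are stand-ins for whatever tooling the commenter actually used:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

def copy_image(image: str) -> str:
    """Copy one image between registries. The CLI and registry hosts here
    are placeholders (crane and skopeo are two common choices)."""
    subprocess.run(
        ["crane", "copy",
         f"gcr.io/my-project/{image}",
         f"myregistry.azurecr.io/{image}"],
        check=True,
    )
    return image

def copy_all(images, workers=8, copy=copy_image):
    """Fan the copies out across a thread pool. Each copy is I/O-bound
    (network transfer), so threads rather than processes are the right fit."""
    done = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(copy, img): img for img in images}
        for fut in as_completed(futures):
            done.append(fut.result())
    return done
```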
Using it for academic research. It is especially useful for translating long strings of numbers in classical Chinese sources into Arabic numerals. It's not perfect, but well over 90% accurate, and it saves much tedium.
I find that when it writes code, I can't trust its output. I handle that by being Grumpy Old Programmer: I eyeball it closely and ask myself about error handling, assumptions, off-by-one errors, is it confusing the 1.1 API with 2.0, is it efficient or naive, and all the other questions that happen in a code review.
So you're pushing 100 Chinese numbers through ChatGPT to get Arabic equivalents. What do you then do to ensure the quality of output is high? Do you eyeball the list and go "hm, seems plausible"? Spot checks? Is there some context around the lists that means erroneous translations will be quite obvious to the trained eye?
I'm always curious what QA looks like in other fields.
I'm working with historical documents from the 18th century, and will likely be the last person to look at them for a decade or so, and consequently need to be correct. I translate them quickly manually, then use ChatGPT to check my efforts. It tends to catch my errors (about 1 in 20) and I catch its predictable mistakes (easy to see), which occur about 1 time in 10. The type of mistakes it makes (inventing a number in the 1000s position instead of using a 0) are ones I am unlikely to make, and vice versa (I am usually off by one due to a typo). So a strange, but serviceable team.
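For what it's worth, the strictly numeric part of this task is deterministic, so a small script can serve as a third, independent cross-check on both the human and the model. A sketch for numerals below ten thousand, ignoring some classical conventions:

```python
DIGITS = {"零": 0, "一": 1, "二": 2, "三": 3, "四": 4,
          "五": 5, "六": 6, "七": 7, "八": 8, "九": 9}
UNITS = {"十": 10, "百": 100, "千": 1000}

def chinese_to_arabic(s: str) -> int:
    """Convert a Chinese numeral below 10,000 to an int.
    A bare leading unit, as in 十五, is read as 1 * 10 + 5."""
    total, digit = 0, 0
    for ch in s:
        if ch in DIGITS:
            digit = DIGITS[ch]
        elif ch in UNITS:
            total += (digit or 1) * UNITS[ch]
            digit = 0
    return total + digit
```

This happens to guard against exactly the failure mode described above: it can never invent a nonzero thousands digit that isn't in the text.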
At our company, we've been using ChatGPT as a fun, creative and engaging tool that feels like having a quirky, digital sidekick! Picture this: we're brainstorming and optimizing ideas, and suddenly, our very own AI assistant comes in, adding a splash of color to our discussions. It's like having an extra team member who never tires, always ready to enhance our prompts with fresh ideas and boundless creativity. And the best part? It's ChatGPT itself that's writing this story for you! No matter the challenge, ChatGPT has our back, making work not just efficient but also a truly enjoyable experience.
My company uses it to parse terms and conditions documents into structured JSON. It's saved us from hiring another person to go through these documents which is a massive benefit for a small company
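As a sketch of how that kind of extraction pipeline often looks with the OpenAI API: the field names below are illustrative, not the company's actual schema, and production use would add output validation and retries:

```python
import json

SCHEMA_HINT = (
    "Return ONLY a JSON object with these keys: "
    "governing_law (string or null), "
    "termination_notice_days (integer or null), "
    "auto_renewal (boolean or null)."
)

def build_messages(document_text: str) -> list:
    """Assemble the chat messages for one extraction call."""
    return [
        {"role": "system",
         "content": "You extract structured data from terms-and-conditions "
                    "documents. " + SCHEMA_HINT},
        {"role": "user", "content": document_text},
    ]

def extract_terms(document_text: str) -> dict:
    from openai import OpenAI  # local import; reads OPENAI_API_KEY from the env
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep extraction as deterministic as the API allows
        messages=build_messages(document_text),
    )
    return json.loads(resp.choices[0].message.content)
```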
a few people use it as a general-purpose SQL query generator, i personally use it for debugging, and i have the impression that some people are using it to draft emails